Test Report: KVM_Linux_crio 19875

9b6a7d882f95daeab36015d5b0633b1bcea3cc50:2024-10-28:36842

Failed tests (31/320)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 155.89
38 TestAddons/parallel/MetricsServer 366.08
47 TestAddons/StoppedEnableDisable 154.45
166 TestMultiControlPlane/serial/StopSecondaryNode 141.37
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.62
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.29
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.45
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 365.87
173 TestMultiControlPlane/serial/StopCluster 141.89
233 TestMultiNode/serial/RestartKeepsNodes 327.43
235 TestMultiNode/serial/StopMultiNode 145.11
242 TestPreload 270.8
250 TestKubernetesUpgrade 1173.67
292 TestStartStop/group/old-k8s-version/serial/FirstStart 292.57
300 TestStartStop/group/embed-certs/serial/Stop 139.13
303 TestStartStop/group/no-preload/serial/Stop 139.14
304 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 93.09
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/old-k8s-version/serial/SecondStart 761.43
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.96
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 541.94
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.36
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.99
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.39
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 430.47
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 386.9
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 107.98
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 541.98
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 466.26
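
The first failure detailed below, TestAddons/parallel/Ingress, is the in-VM curl against the nginx ingress timing out (ssh exit status 28, curl's timeout code). A minimal sketch of rerunning that check by hand, assuming the same profile name (addons-558164) and the locally built minikube binary at out/minikube-linux-amd64 shown in the log:

    # wait for the ingress-nginx controller to become ready
    kubectl --context addons-558164 wait --for=condition=ready --namespace=ingress-nginx pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s
    # deploy the test ingress plus backing nginx pod/service from the repo's testdata
    kubectl --context addons-558164 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-558164 replace --force -f testdata/nginx-pod-svc.yaml
    # the step that failed: curl the ingress from inside the VM with the test Host header
    out/minikube-linux-amd64 -p addons-558164 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"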
TestAddons/parallel/Ingress (155.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-558164 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-558164 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-558164 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [87e4099c-e1d5-4974-ab0b-e2de82c733dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [87e4099c-e1d5-4974-ab0b-e2de82c733dc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.031904261s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-558164 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.201181084s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-558164 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.31
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-558164 -n addons-558164
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 logs -n 25: (1.138810666s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| delete  | -p download-only-165595                                                                     | download-only-165595 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| delete  | -p download-only-618409                                                                     | download-only-618409 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| delete  | -p download-only-165595                                                                     | download-only-165595 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-029933 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC |                     |
	|         | binary-mirror-029933                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41615                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-029933                                                                     | binary-mirror-029933 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| addons  | enable dashboard -p                                                                         | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC |                     |
	|         | addons-558164                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC |                     |
	|         | addons-558164                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-558164 --wait=true                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | -p addons-558164                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:40 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-558164 ip                                                                            | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:40 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-558164 ssh cat                                                                       | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | /opt/local-path-provisioner/pvc-ebacc6ce-c961-47ab-93f4-2185834202e1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-558164 ssh curl -s                                                                   | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-558164 ip                                                                            | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:37:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:37:13.141222   85575 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:37:13.141449   85575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:37:13.141469   85575 out.go:358] Setting ErrFile to fd 2...
	I1028 11:37:13.141476   85575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:37:13.141958   85575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:37:13.142565   85575 out.go:352] Setting JSON to false
	I1028 11:37:13.143368   85575 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4783,"bootTime":1730110650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:37:13.143459   85575 start.go:139] virtualization: kvm guest
	I1028 11:37:13.145363   85575 out.go:177] * [addons-558164] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:37:13.146623   85575 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:37:13.146625   85575 notify.go:220] Checking for updates...
	I1028 11:37:13.148035   85575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:37:13.149318   85575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:37:13.150556   85575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:37:13.151784   85575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:37:13.153033   85575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:37:13.154683   85575 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:37:13.186107   85575 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:37:13.187318   85575 start.go:297] selected driver: kvm2
	I1028 11:37:13.187329   85575 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:37:13.187339   85575 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:37:13.188069   85575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:37:13.188145   85575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:37:13.202560   85575 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:37:13.202611   85575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:37:13.202894   85575 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:37:13.202933   85575 cni.go:84] Creating CNI manager for ""
	I1028 11:37:13.202995   85575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:37:13.203008   85575 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 11:37:13.203062   85575 start.go:340] cluster config:
	{Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:37:13.203189   85575 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:37:13.204774   85575 out.go:177] * Starting "addons-558164" primary control-plane node in "addons-558164" cluster
	I1028 11:37:13.205814   85575 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:37:13.205847   85575 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:37:13.205855   85575 cache.go:56] Caching tarball of preloaded images
	I1028 11:37:13.205931   85575 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:37:13.205945   85575 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:37:13.206228   85575 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/config.json ...
	I1028 11:37:13.206247   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/config.json: {Name:mk21e799f46066ed7eec2f0ed0902ce4db33f071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:13.206380   85575 start.go:360] acquireMachinesLock for addons-558164: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:37:13.206442   85575 start.go:364] duration metric: took 43.994µs to acquireMachinesLock for "addons-558164"
	I1028 11:37:13.206466   85575 start.go:93] Provisioning new machine with config: &{Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:37:13.206523   85575 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:37:13.208790   85575 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 11:37:13.208921   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:13.208967   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:13.222161   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1028 11:37:13.222539   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:13.223213   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:13.223233   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:13.223571   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:13.223759   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:13.223883   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:13.224030   85575 start.go:159] libmachine.API.Create for "addons-558164" (driver="kvm2")
	I1028 11:37:13.224052   85575 client.go:168] LocalClient.Create starting
	I1028 11:37:13.224081   85575 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:37:13.390440   85575 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:37:13.677081   85575 main.go:141] libmachine: Running pre-create checks...
	I1028 11:37:13.677110   85575 main.go:141] libmachine: (addons-558164) Calling .PreCreateCheck
	I1028 11:37:13.677544   85575 main.go:141] libmachine: (addons-558164) Calling .GetConfigRaw
	I1028 11:37:13.678015   85575 main.go:141] libmachine: Creating machine...
	I1028 11:37:13.678031   85575 main.go:141] libmachine: (addons-558164) Calling .Create
	I1028 11:37:13.678162   85575 main.go:141] libmachine: (addons-558164) Creating KVM machine...
	I1028 11:37:13.679380   85575 main.go:141] libmachine: (addons-558164) DBG | found existing default KVM network
	I1028 11:37:13.680095   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:13.679930   85597 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1028 11:37:13.680120   85575 main.go:141] libmachine: (addons-558164) DBG | created network xml: 
	I1028 11:37:13.680132   85575 main.go:141] libmachine: (addons-558164) DBG | <network>
	I1028 11:37:13.680144   85575 main.go:141] libmachine: (addons-558164) DBG |   <name>mk-addons-558164</name>
	I1028 11:37:13.680157   85575 main.go:141] libmachine: (addons-558164) DBG |   <dns enable='no'/>
	I1028 11:37:13.680162   85575 main.go:141] libmachine: (addons-558164) DBG |   
	I1028 11:37:13.680168   85575 main.go:141] libmachine: (addons-558164) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:37:13.680174   85575 main.go:141] libmachine: (addons-558164) DBG |     <dhcp>
	I1028 11:37:13.680180   85575 main.go:141] libmachine: (addons-558164) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:37:13.680186   85575 main.go:141] libmachine: (addons-558164) DBG |     </dhcp>
	I1028 11:37:13.680191   85575 main.go:141] libmachine: (addons-558164) DBG |   </ip>
	I1028 11:37:13.680196   85575 main.go:141] libmachine: (addons-558164) DBG |   
	I1028 11:37:13.680202   85575 main.go:141] libmachine: (addons-558164) DBG | </network>
	I1028 11:37:13.680210   85575 main.go:141] libmachine: (addons-558164) DBG | 
	I1028 11:37:13.685189   85575 main.go:141] libmachine: (addons-558164) DBG | trying to create private KVM network mk-addons-558164 192.168.39.0/24...
	I1028 11:37:13.747391   85575 main.go:141] libmachine: (addons-558164) DBG | private KVM network mk-addons-558164 192.168.39.0/24 created
	I1028 11:37:13.747427   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:13.747359   85597 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:37:13.747446   85575 main.go:141] libmachine: (addons-558164) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164 ...
	I1028 11:37:13.747465   85575 main.go:141] libmachine: (addons-558164) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:37:13.747482   85575 main.go:141] libmachine: (addons-558164) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:37:13.995999   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:13.995832   85597 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa...
	I1028 11:37:14.093373   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:14.093256   85597 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/addons-558164.rawdisk...
	I1028 11:37:14.093406   85575 main.go:141] libmachine: (addons-558164) DBG | Writing magic tar header
	I1028 11:37:14.093419   85575 main.go:141] libmachine: (addons-558164) DBG | Writing SSH key tar header
	I1028 11:37:14.093513   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:14.093414   85597 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164 ...
	I1028 11:37:14.093562   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164
	I1028 11:37:14.093605   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164 (perms=drwx------)
	I1028 11:37:14.093626   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:37:14.093638   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:37:14.093649   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:37:14.093671   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:37:14.093685   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:37:14.093702   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:37:14.093719   85575 main.go:141] libmachine: (addons-558164) Creating domain...
	I1028 11:37:14.093730   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:37:14.093751   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:37:14.093766   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:37:14.093779   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:37:14.093795   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home
	I1028 11:37:14.093817   85575 main.go:141] libmachine: (addons-558164) DBG | Skipping /home - not owner
	I1028 11:37:14.094849   85575 main.go:141] libmachine: (addons-558164) define libvirt domain using xml: 
	I1028 11:37:14.094884   85575 main.go:141] libmachine: (addons-558164) <domain type='kvm'>
	I1028 11:37:14.094894   85575 main.go:141] libmachine: (addons-558164)   <name>addons-558164</name>
	I1028 11:37:14.094901   85575 main.go:141] libmachine: (addons-558164)   <memory unit='MiB'>4000</memory>
	I1028 11:37:14.094908   85575 main.go:141] libmachine: (addons-558164)   <vcpu>2</vcpu>
	I1028 11:37:14.094918   85575 main.go:141] libmachine: (addons-558164)   <features>
	I1028 11:37:14.094926   85575 main.go:141] libmachine: (addons-558164)     <acpi/>
	I1028 11:37:14.094935   85575 main.go:141] libmachine: (addons-558164)     <apic/>
	I1028 11:37:14.094942   85575 main.go:141] libmachine: (addons-558164)     <pae/>
	I1028 11:37:14.094951   85575 main.go:141] libmachine: (addons-558164)     
	I1028 11:37:14.094958   85575 main.go:141] libmachine: (addons-558164)   </features>
	I1028 11:37:14.094966   85575 main.go:141] libmachine: (addons-558164)   <cpu mode='host-passthrough'>
	I1028 11:37:14.094974   85575 main.go:141] libmachine: (addons-558164)   
	I1028 11:37:14.094989   85575 main.go:141] libmachine: (addons-558164)   </cpu>
	I1028 11:37:14.094997   85575 main.go:141] libmachine: (addons-558164)   <os>
	I1028 11:37:14.095006   85575 main.go:141] libmachine: (addons-558164)     <type>hvm</type>
	I1028 11:37:14.095037   85575 main.go:141] libmachine: (addons-558164)     <boot dev='cdrom'/>
	I1028 11:37:14.095052   85575 main.go:141] libmachine: (addons-558164)     <boot dev='hd'/>
	I1028 11:37:14.095060   85575 main.go:141] libmachine: (addons-558164)     <bootmenu enable='no'/>
	I1028 11:37:14.095069   85575 main.go:141] libmachine: (addons-558164)   </os>
	I1028 11:37:14.095082   85575 main.go:141] libmachine: (addons-558164)   <devices>
	I1028 11:37:14.095091   85575 main.go:141] libmachine: (addons-558164)     <disk type='file' device='cdrom'>
	I1028 11:37:14.095103   85575 main.go:141] libmachine: (addons-558164)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/boot2docker.iso'/>
	I1028 11:37:14.095112   85575 main.go:141] libmachine: (addons-558164)       <target dev='hdc' bus='scsi'/>
	I1028 11:37:14.095120   85575 main.go:141] libmachine: (addons-558164)       <readonly/>
	I1028 11:37:14.095127   85575 main.go:141] libmachine: (addons-558164)     </disk>
	I1028 11:37:14.095136   85575 main.go:141] libmachine: (addons-558164)     <disk type='file' device='disk'>
	I1028 11:37:14.095148   85575 main.go:141] libmachine: (addons-558164)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:37:14.095161   85575 main.go:141] libmachine: (addons-558164)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/addons-558164.rawdisk'/>
	I1028 11:37:14.095174   85575 main.go:141] libmachine: (addons-558164)       <target dev='hda' bus='virtio'/>
	I1028 11:37:14.095207   85575 main.go:141] libmachine: (addons-558164)     </disk>
	I1028 11:37:14.095229   85575 main.go:141] libmachine: (addons-558164)     <interface type='network'>
	I1028 11:37:14.095240   85575 main.go:141] libmachine: (addons-558164)       <source network='mk-addons-558164'/>
	I1028 11:37:14.095249   85575 main.go:141] libmachine: (addons-558164)       <model type='virtio'/>
	I1028 11:37:14.095261   85575 main.go:141] libmachine: (addons-558164)     </interface>
	I1028 11:37:14.095272   85575 main.go:141] libmachine: (addons-558164)     <interface type='network'>
	I1028 11:37:14.095284   85575 main.go:141] libmachine: (addons-558164)       <source network='default'/>
	I1028 11:37:14.095295   85575 main.go:141] libmachine: (addons-558164)       <model type='virtio'/>
	I1028 11:37:14.095305   85575 main.go:141] libmachine: (addons-558164)     </interface>
	I1028 11:37:14.095315   85575 main.go:141] libmachine: (addons-558164)     <serial type='pty'>
	I1028 11:37:14.095334   85575 main.go:141] libmachine: (addons-558164)       <target port='0'/>
	I1028 11:37:14.095350   85575 main.go:141] libmachine: (addons-558164)     </serial>
	I1028 11:37:14.095375   85575 main.go:141] libmachine: (addons-558164)     <console type='pty'>
	I1028 11:37:14.095414   85575 main.go:141] libmachine: (addons-558164)       <target type='serial' port='0'/>
	I1028 11:37:14.095428   85575 main.go:141] libmachine: (addons-558164)     </console>
	I1028 11:37:14.095435   85575 main.go:141] libmachine: (addons-558164)     <rng model='virtio'>
	I1028 11:37:14.095448   85575 main.go:141] libmachine: (addons-558164)       <backend model='random'>/dev/random</backend>
	I1028 11:37:14.095457   85575 main.go:141] libmachine: (addons-558164)     </rng>
	I1028 11:37:14.095465   85575 main.go:141] libmachine: (addons-558164)     
	I1028 11:37:14.095473   85575 main.go:141] libmachine: (addons-558164)     
	I1028 11:37:14.095487   85575 main.go:141] libmachine: (addons-558164)   </devices>
	I1028 11:37:14.095498   85575 main.go:141] libmachine: (addons-558164) </domain>
	I1028 11:37:14.095511   85575 main.go:141] libmachine: (addons-558164) 
	I1028 11:37:14.099642   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:88:22:dc in network default
	I1028 11:37:14.100233   85575 main.go:141] libmachine: (addons-558164) Ensuring networks are active...
	I1028 11:37:14.100252   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:14.100920   85575 main.go:141] libmachine: (addons-558164) Ensuring network default is active
	I1028 11:37:14.101235   85575 main.go:141] libmachine: (addons-558164) Ensuring network mk-addons-558164 is active
	I1028 11:37:14.101734   85575 main.go:141] libmachine: (addons-558164) Getting domain xml...
	I1028 11:37:14.102491   85575 main.go:141] libmachine: (addons-558164) Creating domain...
	I1028 11:37:15.278262   85575 main.go:141] libmachine: (addons-558164) Waiting to get IP...
	I1028 11:37:15.279050   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:15.279449   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:15.279505   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:15.279446   85597 retry.go:31] will retry after 250.712213ms: waiting for machine to come up
	I1028 11:37:15.531891   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:15.532440   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:15.532469   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:15.532382   85597 retry.go:31] will retry after 317.721645ms: waiting for machine to come up
	I1028 11:37:15.851968   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:15.852430   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:15.852452   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:15.852389   85597 retry.go:31] will retry after 416.193792ms: waiting for machine to come up
	I1028 11:37:16.269654   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:16.270164   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:16.270206   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:16.270104   85597 retry.go:31] will retry after 596.082177ms: waiting for machine to come up
	I1028 11:37:16.867870   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:16.868226   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:16.868257   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:16.868167   85597 retry.go:31] will retry after 494.569738ms: waiting for machine to come up
	I1028 11:37:17.364782   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:17.365180   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:17.365211   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:17.365125   85597 retry.go:31] will retry after 705.333219ms: waiting for machine to come up
	I1028 11:37:18.071942   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:18.072306   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:18.072337   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:18.072244   85597 retry.go:31] will retry after 1.035817145s: waiting for machine to come up
	I1028 11:37:19.110041   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:19.110516   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:19.110541   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:19.110475   85597 retry.go:31] will retry after 1.293081461s: waiting for machine to come up
	I1028 11:37:20.405970   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:20.406392   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:20.406424   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:20.406321   85597 retry.go:31] will retry after 1.126472716s: waiting for machine to come up
	I1028 11:37:21.534558   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:21.534916   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:21.534974   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:21.534896   85597 retry.go:31] will retry after 1.87018139s: waiting for machine to come up
	I1028 11:37:23.406775   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:23.407187   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:23.407213   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:23.407138   85597 retry.go:31] will retry after 2.417463202s: waiting for machine to come up
	I1028 11:37:25.827684   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:25.828209   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:25.828238   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:25.828148   85597 retry.go:31] will retry after 2.584942589s: waiting for machine to come up
	I1028 11:37:28.414400   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:28.414749   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:28.414779   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:28.414696   85597 retry.go:31] will retry after 2.884443891s: waiting for machine to come up
	I1028 11:37:31.300952   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:31.301311   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:31.301334   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:31.301273   85597 retry.go:31] will retry after 3.721637101s: waiting for machine to come up
	I1028 11:37:35.024742   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.025083   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has current primary IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.025102   85575 main.go:141] libmachine: (addons-558164) Found IP for machine: 192.168.39.31
	I1028 11:37:35.025117   85575 main.go:141] libmachine: (addons-558164) Reserving static IP address...
	I1028 11:37:35.025876   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find host DHCP lease matching {name: "addons-558164", mac: "52:54:00:8d:cc:de", ip: "192.168.39.31"} in network mk-addons-558164
	I1028 11:37:35.135135   85575 main.go:141] libmachine: (addons-558164) DBG | Getting to WaitForSSH function...
	I1028 11:37:35.135165   85575 main.go:141] libmachine: (addons-558164) Reserved static IP address: 192.168.39.31
	I1028 11:37:35.135177   85575 main.go:141] libmachine: (addons-558164) Waiting for SSH to be available...
	I1028 11:37:35.138161   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.138638   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.138678   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.138909   85575 main.go:141] libmachine: (addons-558164) DBG | Using SSH client type: external
	I1028 11:37:35.138937   85575 main.go:141] libmachine: (addons-558164) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa (-rw-------)
	I1028 11:37:35.139001   85575 main.go:141] libmachine: (addons-558164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:37:35.139028   85575 main.go:141] libmachine: (addons-558164) DBG | About to run SSH command:
	I1028 11:37:35.139042   85575 main.go:141] libmachine: (addons-558164) DBG | exit 0
	I1028 11:37:35.259192   85575 main.go:141] libmachine: (addons-558164) DBG | SSH cmd err, output: <nil>: 
	I1028 11:37:35.259437   85575 main.go:141] libmachine: (addons-558164) KVM machine creation complete!
	I1028 11:37:35.259799   85575 main.go:141] libmachine: (addons-558164) Calling .GetConfigRaw
	I1028 11:37:35.292576   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:35.292859   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:35.293064   85575 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:37:35.293082   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:35.294472   85575 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:37:35.294486   85575 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:37:35.294491   85575 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:37:35.294498   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.296816   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.297176   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.297203   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.297343   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.297533   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.297690   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.298024   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.298215   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.298492   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.298509   85575 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:37:35.394625   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:37:35.394652   85575 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:37:35.394662   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.397428   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.397774   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.397800   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.398016   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.398179   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.398336   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.398480   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.398643   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.398816   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.398826   85575 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:37:35.491679   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:37:35.491746   85575 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:37:35.491756   85575 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:37:35.491767   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:35.492003   85575 buildroot.go:166] provisioning hostname "addons-558164"
	I1028 11:37:35.492039   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:35.492227   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.495011   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.495361   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.495390   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.495551   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.495743   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.495892   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.496022   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.496172   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.496348   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.496365   85575 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-558164 && echo "addons-558164" | sudo tee /etc/hostname
	I1028 11:37:35.603899   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-558164
	
	I1028 11:37:35.603949   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.606744   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.607225   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.607253   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.607425   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.607620   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.607799   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.607922   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.608076   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.608244   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.608264   85575 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-558164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-558164/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-558164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:37:35.710980   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:37:35.711019   85575 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:37:35.711052   85575 buildroot.go:174] setting up certificates
	I1028 11:37:35.711070   85575 provision.go:84] configureAuth start
	I1028 11:37:35.711088   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:35.711352   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:35.714111   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.714470   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.714501   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.714632   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.717095   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.717381   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.717406   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.717530   85575 provision.go:143] copyHostCerts
	I1028 11:37:35.717622   85575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:37:35.717771   85575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:37:35.717853   85575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:37:35.717926   85575 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.addons-558164 san=[127.0.0.1 192.168.39.31 addons-558164 localhost minikube]
	I1028 11:37:35.781569   85575 provision.go:177] copyRemoteCerts
	I1028 11:37:35.781633   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:37:35.781659   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.783888   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.784201   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.784229   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.784401   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.784583   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.784742   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.784874   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:35.860777   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:37:35.883519   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:37:35.904258   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:37:35.924625   85575 provision.go:87] duration metric: took 213.535974ms to configureAuth
	I1028 11:37:35.924657   85575 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:37:35.924853   85575 config.go:182] Loaded profile config "addons-558164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:37:35.924941   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.927290   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.927668   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.927691   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.927841   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.928034   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.928173   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.928290   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.928465   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.928667   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.928687   85575 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:37:36.147947   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
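The step above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and restarts crio, so the runtime will accept non-TLS pulls from registries exposed on in-cluster service addresses. A quick way to confirm the drop-in is in place (a hypothetical check, not captured in this run):

    $ minikube -p addons-558164 ssh -- cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '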
	I1028 11:37:36.147982   85575 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:37:36.147992   85575 main.go:141] libmachine: (addons-558164) Calling .GetURL
	I1028 11:37:36.149469   85575 main.go:141] libmachine: (addons-558164) DBG | Using libvirt version 6000000
	I1028 11:37:36.151424   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.151838   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.151864   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.152014   85575 main.go:141] libmachine: Docker is up and running!
	I1028 11:37:36.152029   85575 main.go:141] libmachine: Reticulating splines...
	I1028 11:37:36.152038   85575 client.go:171] duration metric: took 22.927977306s to LocalClient.Create
	I1028 11:37:36.152061   85575 start.go:167] duration metric: took 22.928033489s to libmachine.API.Create "addons-558164"
	I1028 11:37:36.152080   85575 start.go:293] postStartSetup for "addons-558164" (driver="kvm2")
	I1028 11:37:36.152093   85575 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:37:36.152109   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.152344   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:37:36.152371   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.154565   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.154930   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.154963   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.155094   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.155278   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.155459   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.155698   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:36.233438   85575 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:37:36.237296   85575 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:37:36.237320   85575 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:37:36.237394   85575 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:37:36.237419   85575 start.go:296] duration metric: took 85.331265ms for postStartSetup
	I1028 11:37:36.237457   85575 main.go:141] libmachine: (addons-558164) Calling .GetConfigRaw
	I1028 11:37:36.238016   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:36.240377   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.240705   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.240732   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.240955   85575 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/config.json ...
	I1028 11:37:36.241167   85575 start.go:128] duration metric: took 23.034632595s to createHost
	I1028 11:37:36.241194   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.244091   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.244450   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.244488   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.244591   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.244780   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.244996   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.245172   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.245329   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:36.245498   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:36.245508   85575 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:37:36.339913   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730115456.310135737
	
	I1028 11:37:36.339939   85575 fix.go:216] guest clock: 1730115456.310135737
	I1028 11:37:36.339947   85575 fix.go:229] Guest: 2024-10-28 11:37:36.310135737 +0000 UTC Remote: 2024-10-28 11:37:36.24118174 +0000 UTC m=+23.137199363 (delta=68.953997ms)
	I1028 11:37:36.340002   85575 fix.go:200] guest clock delta is within tolerance: 68.953997ms
	I1028 11:37:36.340011   85575 start.go:83] releasing machines lock for "addons-558164", held for 23.13355684s
	I1028 11:37:36.340036   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.340295   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:36.342913   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.343237   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.343259   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.343506   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.344046   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.344234   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.344348   85575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:37:36.344401   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.344446   85575 ssh_runner.go:195] Run: cat /version.json
	I1028 11:37:36.344473   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.347120   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347316   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347474   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.347498   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347596   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.347718   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.347742   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347784   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.347916   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.348108   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.348116   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.348286   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.348286   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:36.348406   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:36.441152   85575 ssh_runner.go:195] Run: systemctl --version
	I1028 11:37:36.446504   85575 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:37:36.605420   85575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:37:36.610633   85575 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:37:36.610713   85575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:37:36.625351   85575 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:37:36.625384   85575 start.go:495] detecting cgroup driver to use...
	I1028 11:37:36.625456   85575 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:37:36.641617   85575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:37:36.654291   85575 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:37:36.654362   85575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:37:36.666862   85575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:37:36.679447   85575 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:37:36.797484   85575 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:37:36.953333   85575 docker.go:233] disabling docker service ...
	I1028 11:37:36.953402   85575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:37:36.966757   85575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:37:36.978803   85575 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:37:37.089989   85575 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:37:37.199847   85575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:37:37.212373   85575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:37:37.228120   85575 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:37:37.228179   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.237080   85575 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:37:37.237159   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.245953   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.255758   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.264822   85575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:37:37.274024   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.283082   85575 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.297348   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
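Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, run conmon in the pod cgroup, and open unprivileged low ports. Roughly the resulting fragment of /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the commands above, not captured from the VM; the exact section layout may differ):

    $ cat /etc/crio/crio.conf.d/02-crio.conf
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]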
	I1028 11:37:37.306307   85575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:37:37.314805   85575 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:37:37.314889   85575 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:37:37.326781   85575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
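The sysctl failure above only means br_netfilter was not loaded yet, so /proc/sys/net/bridge/ did not exist; loading the module and enabling net.ipv4.ip_forward are the usual prerequisites for bridge-based pod networking. Checking the end state by hand would look roughly like this (hypothetical commands, not part of the run):

    $ lsmod | grep br_netfilter
    $ sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both should report 1 so that pod traffic crossing the bridge is forwarded and filtered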
	I1028 11:37:37.334931   85575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:37:37.443819   85575 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:37:37.530697   85575 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:37:37.530800   85575 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:37:37.535943   85575 start.go:563] Will wait 60s for crictl version
	I1028 11:37:37.536007   85575 ssh_runner.go:195] Run: which crictl
	I1028 11:37:37.539123   85575 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:37:37.577928   85575 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:37:37.578010   85575 ssh_runner.go:195] Run: crio --version
	I1028 11:37:37.602930   85575 ssh_runner.go:195] Run: crio --version
	I1028 11:37:37.630286   85575 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:37:37.631671   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:37.634296   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:37.634682   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:37.634708   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:37.634885   85575 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:37:37.638588   85575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
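The one-liner above rewrites /etc/hosts inside the guest so that host.minikube.internal resolves to the host side of the KVM network (192.168.39.1); the same pattern is applied further down for control-plane.minikube.internal. The intended end state, sketched (not captured output):

    $ grep minikube.internal /etc/hosts
    192.168.39.1	host.minikube.internal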
	I1028 11:37:37.650531   85575 kubeadm.go:883] updating cluster {Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:37:37.650700   85575 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:37:37.650770   85575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:37:37.680617   85575 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:37:37.680703   85575 ssh_runner.go:195] Run: which lz4
	I1028 11:37:37.684302   85575 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:37:37.688153   85575 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:37:37.688185   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:37:38.818986   85575 crio.go:462] duration metric: took 1.134706489s to copy over tarball
	I1028 11:37:38.819058   85575 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:37:40.812379   85575 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.993288544s)
	I1028 11:37:40.812417   85575 crio.go:469] duration metric: took 1.993400575s to extract the tarball
	I1028 11:37:40.812430   85575 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:37:40.847345   85575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:37:40.888064   85575 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:37:40.888091   85575 cache_images.go:84] Images are preloaded, skipping loading
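After the preload tarball is extracted into /var, the second crictl scan above finds all images needed for v1.31.2 already in CRI-O's store, so no individual pulls are required before kubeadm starts. Listing them by hand would look something like this (illustrative command; the full image list is not printed in this log):

    $ sudo crictl images
    IMAGE                              TAG       ...
    registry.k8s.io/kube-apiserver     v1.31.2   ...
    registry.k8s.io/pause              3.10      ...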
	I1028 11:37:40.888100   85575 kubeadm.go:934] updating node { 192.168.39.31 8443 v1.31.2 crio true true} ...
	I1028 11:37:40.888220   85575 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-558164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:37:40.888286   85575 ssh_runner.go:195] Run: crio config
	I1028 11:37:40.928934   85575 cni.go:84] Creating CNI manager for ""
	I1028 11:37:40.928963   85575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:37:40.928975   85575 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:37:40.928998   85575 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-558164 NodeName:addons-558164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:37:40.929115   85575 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-558164"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.31"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
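The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are the payload that gets copied to the node a few lines below and eventually handed to kubeadm. The relevant paths, as they appear later in this log:

    /var/tmp/minikube/kubeadm.yaml.new   # written over scp (2290 bytes)
    /var/tmp/minikube/kubeadm.yaml       # copied into place and passed to "kubeadm init --config ..."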
	I1028 11:37:40.929174   85575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:37:40.937796   85575 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:37:40.937872   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:37:40.945990   85575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:37:40.960178   85575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:37:40.974016   85575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I1028 11:37:40.988538   85575 ssh_runner.go:195] Run: grep 192.168.39.31	control-plane.minikube.internal$ /etc/hosts
	I1028 11:37:40.991680   85575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:37:41.001997   85575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:37:41.121146   85575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:37:41.137478   85575 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164 for IP: 192.168.39.31
	I1028 11:37:41.137524   85575 certs.go:194] generating shared ca certs ...
	I1028 11:37:41.137549   85575 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.137730   85575 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:37:41.323762   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt ...
	I1028 11:37:41.323796   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt: {Name:mkc0b2b57f64ada4d969dda25941c2328582eade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.323973   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key ...
	I1028 11:37:41.323985   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key: {Name:mkd279fafe08c0316b34fd1a2897fb0bb5a048b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.324068   85575 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:37:41.737932   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt ...
	I1028 11:37:41.737964   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt: {Name:mk06bd09f2b619ede58b750d31dd90943c21f399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.738120   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key ...
	I1028 11:37:41.738131   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key: {Name:mk6537beaff0b053e2949ae2b84d3eccb7a6f708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.738197   85575 certs.go:256] generating profile certs ...
	I1028 11:37:41.738281   85575 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.key
	I1028 11:37:41.738304   85575 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt with IP's: []
	I1028 11:37:41.926331   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt ...
	I1028 11:37:41.926366   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: {Name:mk977f2dcc9ff37f478f6ba4fe9575f6afa3b18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.926561   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.key ...
	I1028 11:37:41.926576   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.key: {Name:mkd1f3a0b2154057485d76e9d5fc3969b2573f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.926684   85575 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb
	I1028 11:37:41.926705   85575 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31]
	I1028 11:37:42.277136   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb ...
	I1028 11:37:42.277167   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb: {Name:mk5aa529d00b15c94fe638a9c72f96545f1c3feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.277347   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb ...
	I1028 11:37:42.277364   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb: {Name:mkd0d87bc729a17c13c7130430c8595841656296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.277461   85575 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt
	I1028 11:37:42.277541   85575 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key
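The apiserver serving certificate assembled above includes 10.96.0.1 in its SANs, i.e. the first address of ServiceCIDR 10.96.0.0/12 and therefore the ClusterIP of the in-cluster kubernetes service, alongside 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.31. One way to inspect the SANs afterwards (a hypothetical check; the -ext flag needs OpenSSL 1.1.1 or newer):

    $ openssl x509 -noout -ext subjectAltName \
        -in /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt
    X509v3 Subject Alternative Name:
        IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.39.31, ...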
	I1028 11:37:42.277586   85575 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key
	I1028 11:37:42.277607   85575 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt with IP's: []
	I1028 11:37:42.480451   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt ...
	I1028 11:37:42.480481   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt: {Name:mkb2e69f56c32095c770f87f4c5341b28506e6dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.480666   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key ...
	I1028 11:37:42.480682   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key: {Name:mk519fae37889f93fa2ec24cc1ac335732e57d5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.480887   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:37:42.480922   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:37:42.480948   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:37:42.480972   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:37:42.481579   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:37:42.504552   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:37:42.528760   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:37:42.549633   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:37:42.571625   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 11:37:42.593780   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 11:37:42.615851   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:37:42.636545   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:37:42.656381   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:37:42.676246   85575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:37:42.690571   85575 ssh_runner.go:195] Run: openssl version
	I1028 11:37:42.696147   85575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:37:42.709644   85575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:37:42.713840   85575 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:37:42.713910   85575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:37:42.720910   85575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
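The last two commands publish the minikube CA into the system trust store: the PEM is linked into /etc/ssl/certs, and a second symlink named after its OpenSSL subject hash (b5213941) is created, which is the naming scheme OpenSSL uses when looking CAs up by hash. Sketch of the expected result on the node (not captured in this run):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0
    ... /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem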
	I1028 11:37:42.731927   85575 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:37:42.736721   85575 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:37:42.736775   85575 kubeadm.go:392] StartCluster: {Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:37:42.736879   85575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:37:42.736927   85575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:37:42.778782   85575 cri.go:89] found id: ""
	I1028 11:37:42.778859   85575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:37:42.787834   85575 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:37:42.796343   85575 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:37:42.804728   85575 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:37:42.804747   85575 kubeadm.go:157] found existing configuration files:
	
	I1028 11:37:42.804798   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:37:42.812654   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:37:42.812709   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:37:42.820889   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:37:42.829261   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:37:42.829315   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:37:42.839320   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:37:42.847009   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:37:42.847054   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:37:42.855175   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:37:42.862972   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:37:42.863017   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
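The stale-config handling above repeats one grep-then-remove pattern per kubeconfig: if the file does not mention the control-plane endpoint (or does not exist), it is deleted before kubeadm runs. A condensed sketch, written as a loop for brevity (the tool itself issues the four command pairs separately, exactly as logged):

    # Condensed sketch of the stale-config cleanup sequence logged above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done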
	I1028 11:37:42.870991   85575 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:37:43.011545   85575 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:37:53.238133   85575 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:37:53.238238   85575 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:37:53.238337   85575 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:37:53.238483   85575 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:37:53.238600   85575 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:37:53.238661   85575 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:37:53.240225   85575 out.go:235]   - Generating certificates and keys ...
	I1028 11:37:53.240295   85575 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:37:53.240364   85575 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:37:53.240450   85575 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:37:53.240514   85575 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:37:53.240598   85575 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:37:53.240668   85575 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:37:53.240748   85575 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:37:53.240877   85575 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-558164 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I1028 11:37:53.240926   85575 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:37:53.241040   85575 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-558164 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I1028 11:37:53.241119   85575 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:37:53.241185   85575 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:37:53.241227   85575 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:37:53.241291   85575 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:37:53.241372   85575 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:37:53.241448   85575 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:37:53.241518   85575 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:37:53.241605   85575 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:37:53.241686   85575 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:37:53.241797   85575 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:37:53.241886   85575 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:37:53.243281   85575 out.go:235]   - Booting up control plane ...
	I1028 11:37:53.243361   85575 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:37:53.243458   85575 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:37:53.243575   85575 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:37:53.243691   85575 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:37:53.243770   85575 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:37:53.243805   85575 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:37:53.243916   85575 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:37:53.244006   85575 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:37:53.244056   85575 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.843329ms
	I1028 11:37:53.244116   85575 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:37:53.244165   85575 kubeadm.go:310] [api-check] The API server is healthy after 5.501539291s
	I1028 11:37:53.244252   85575 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:37:53.244371   85575 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:37:53.244429   85575 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:37:53.244584   85575 kubeadm.go:310] [mark-control-plane] Marking the node addons-558164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:37:53.244682   85575 kubeadm.go:310] [bootstrap-token] Using token: p1t5xv.9jomyucun3sgp4xz
	I1028 11:37:53.246786   85575 out.go:235]   - Configuring RBAC rules ...
	I1028 11:37:53.246907   85575 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:37:53.247004   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:37:53.247141   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:37:53.247279   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:37:53.247386   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:37:53.247461   85575 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:37:53.247561   85575 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:37:53.247616   85575 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:37:53.247684   85575 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:37:53.247692   85575 kubeadm.go:310] 
	I1028 11:37:53.247741   85575 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:37:53.247750   85575 kubeadm.go:310] 
	I1028 11:37:53.247856   85575 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:37:53.247869   85575 kubeadm.go:310] 
	I1028 11:37:53.247904   85575 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:37:53.247995   85575 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:37:53.248073   85575 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:37:53.248083   85575 kubeadm.go:310] 
	I1028 11:37:53.248156   85575 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:37:53.248171   85575 kubeadm.go:310] 
	I1028 11:37:53.248246   85575 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:37:53.248260   85575 kubeadm.go:310] 
	I1028 11:37:53.248335   85575 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:37:53.248437   85575 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:37:53.248531   85575 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:37:53.248549   85575 kubeadm.go:310] 
	I1028 11:37:53.248659   85575 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:37:53.248759   85575 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:37:53.248769   85575 kubeadm.go:310] 
	I1028 11:37:53.248870   85575 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p1t5xv.9jomyucun3sgp4xz \
	I1028 11:37:53.248991   85575 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 11:37:53.249022   85575 kubeadm.go:310] 	--control-plane 
	I1028 11:37:53.249038   85575 kubeadm.go:310] 
	I1028 11:37:53.249179   85575 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:37:53.249198   85575 kubeadm.go:310] 
	I1028 11:37:53.249331   85575 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p1t5xv.9jomyucun3sgp4xz \
	I1028 11:37:53.249503   85575 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
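For readability, the kubeadm init invocation that produced the transcript above (logged at 11:37:42.870991) is reflowed below; every value is taken verbatim from that log line:

    # Same command as in the log, reflowed; no values changed.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem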
	I1028 11:37:53.249524   85575 cni.go:84] Creating CNI manager for ""
	I1028 11:37:53.249533   85575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:37:53.251860   85575 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 11:37:53.253071   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 11:37:53.263814   85575 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
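The 496-byte bridge conflist itself is not reproduced in the log; if needed, it can be read back from the node after the run (illustrative command, using minikube ssh to execute it inside the VM):

    # Illustrative: inspect the bridge CNI config that was just installed.
    minikube ssh -p addons-558164 -- sudo cat /etc/cni/net.d/1-k8s.conflist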
	I1028 11:37:53.281245   85575 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:37:53.281300   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:53.281348   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-558164 minikube.k8s.io/updated_at=2024_10_28T11_37_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=addons-558164 minikube.k8s.io/primary=true
	I1028 11:37:53.421113   85575 ops.go:34] apiserver oom_adj: -16
	I1028 11:37:53.421212   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:53.921641   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:54.421611   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:54.921346   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:55.422221   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:55.921481   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:56.422289   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:56.922232   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:57.421532   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:57.523884   85575 kubeadm.go:1113] duration metric: took 4.242633176s to wait for elevateKubeSystemPrivileges
	I1028 11:37:57.523927   85575 kubeadm.go:394] duration metric: took 14.787157354s to StartCluster
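The RBAC and node-labeling steps logged just above boil down to two kubectl calls against the node-local kubeconfig; condensed for readability (the label command also sets the minikube.k8s.io/updated_at, version and commit labels exactly as logged):

    # Condensed from the log: grant kube-system:default cluster-admin, then label the node.
    KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
    sudo $KUBECTL --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    sudo $KUBECTL --kubeconfig=/var/lib/minikube/kubeconfig \
      label --overwrite nodes addons-558164 \
      minikube.k8s.io/name=addons-558164 minikube.k8s.io/primary=true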
	I1028 11:37:57.523950   85575 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:57.524080   85575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:37:57.524467   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:57.524678   85575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:37:57.524687   85575 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:37:57.524766   85575 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
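The toEnable map above mirrors what the minikube addons CLI exposes; for this profile the same toggles could be inspected or changed by hand, e.g. (illustrative commands, not part of the test run):

    # Illustrative: inspect and toggle addons for this profile from the CLI.
    minikube addons list -p addons-558164
    minikube addons enable metrics-server -p addons-558164
    minikube addons disable volcano -p addons-558164

Note that the run below reports enabling volcano fails on this runtime ("volcano addon does not support crio").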
	I1028 11:37:57.524903   85575 addons.go:69] Setting inspektor-gadget=true in profile "addons-558164"
	I1028 11:37:57.524922   85575 addons.go:69] Setting metrics-server=true in profile "addons-558164"
	I1028 11:37:57.524920   85575 addons.go:69] Setting default-storageclass=true in profile "addons-558164"
	I1028 11:37:57.524936   85575 addons.go:234] Setting addon inspektor-gadget=true in "addons-558164"
	I1028 11:37:57.524934   85575 config.go:182] Loaded profile config "addons-558164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:37:57.524945   85575 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-558164"
	I1028 11:37:57.524951   85575 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-558164"
	I1028 11:37:57.524958   85575 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-558164"
	I1028 11:37:57.524959   85575 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-558164"
	I1028 11:37:57.524965   85575 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-558164"
	I1028 11:37:57.524970   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.524963   85575 addons.go:69] Setting storage-provisioner=true in profile "addons-558164"
	I1028 11:37:57.524992   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.524994   85575 addons.go:69] Setting volcano=true in profile "addons-558164"
	I1028 11:37:57.525005   85575 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-558164"
	I1028 11:37:57.525008   85575 addons.go:234] Setting addon volcano=true in "addons-558164"
	I1028 11:37:57.525016   85575 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-558164"
	I1028 11:37:57.525037   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525049   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525062   85575 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-558164"
	I1028 11:37:57.525094   85575 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-558164"
	I1028 11:37:57.525120   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.524902   85575 addons.go:69] Setting yakd=true in profile "addons-558164"
	I1028 11:37:57.525419   85575 addons.go:69] Setting registry=true in profile "addons-558164"
	I1028 11:37:57.525429   85575 addons.go:234] Setting addon yakd=true in "addons-558164"
	I1028 11:37:57.525431   85575 addons.go:234] Setting addon registry=true in "addons-558164"
	I1028 11:37:57.525432   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525438   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525448   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525452   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525471   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525480   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525491   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525520   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525581   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525593   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525602   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525613   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525650   85575 addons.go:69] Setting volumesnapshots=true in profile "addons-558164"
	I1028 11:37:57.525663   85575 addons.go:234] Setting addon volumesnapshots=true in "addons-558164"
	I1028 11:37:57.524936   85575 addons.go:234] Setting addon metrics-server=true in "addons-558164"
	I1028 11:37:57.524933   85575 addons.go:69] Setting cloud-spanner=true in profile "addons-558164"
	I1028 11:37:57.525676   85575 addons.go:69] Setting ingress=true in profile "addons-558164"
	I1028 11:37:57.525682   85575 addons.go:234] Setting addon cloud-spanner=true in "addons-558164"
	I1028 11:37:57.525685   85575 addons.go:234] Setting addon ingress=true in "addons-558164"
	I1028 11:37:57.525696   85575 addons.go:69] Setting gcp-auth=true in profile "addons-558164"
	I1028 11:37:57.525700   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525713   85575 mustload.go:65] Loading cluster: addons-558164
	I1028 11:37:57.525718   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525721   85575 addons.go:69] Setting ingress-dns=true in profile "addons-558164"
	I1028 11:37:57.525732   85575 addons.go:234] Setting addon ingress-dns=true in "addons-558164"
	I1028 11:37:57.524993   85575 addons.go:234] Setting addon storage-provisioner=true in "addons-558164"
	I1028 11:37:57.525849   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525874   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525886   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525902   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525884   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525931   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.526095   85575 config.go:182] Loaded profile config "addons-558164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:37:57.526235   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526268   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.526305   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.526318   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526335   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.526348   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.526435   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526463   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.526304   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526951   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.527039   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.527442   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.527482   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.527535   85575 out.go:177] * Verifying Kubernetes components...
	I1028 11:37:57.528396   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.528790   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.528833   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.529043   85575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:37:57.546735   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I1028 11:37:57.546823   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I1028 11:37:57.546911   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41257
	I1028 11:37:57.547419   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.547543   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.547655   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
	I1028 11:37:57.547740   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1028 11:37:57.548023   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.548049   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.548125   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.548155   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.548197   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.548261   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I1028 11:37:57.548540   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.548577   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.548881   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.548902   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.548971   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.549020   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.549125   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.549402   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.549495   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.549532   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.550029   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550044   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550095   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550194   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550204   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550313   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550325   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550422   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550431   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550620   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550854   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550925   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550972   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.551188   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.551372   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.551404   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.551590   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.551647   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.553173   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.553210   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.572094   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I1028 11:37:57.572317   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.572388   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.572594   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.573305   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.573330   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.573812   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.574320   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.574363   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.574611   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I1028 11:37:57.576260   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35393
	I1028 11:37:57.576690   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.585427   85575 addons.go:234] Setting addon default-storageclass=true in "addons-558164"
	I1028 11:37:57.585489   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.590207   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I1028 11:37:57.590224   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.590379   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I1028 11:37:57.590443   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I1028 11:37:57.590765   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.590785   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.590902   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.590935   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.590978   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.591192   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591216   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.591683   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.591775   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591776   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591790   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.591788   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591792   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.591807   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.592134   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.592182   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.592205   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.592255   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.592300   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.592359   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.592424   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.592595   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.592637   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.607823   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.608071   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.608117   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.608391   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I1028 11:37:57.608821   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.609013   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1028 11:37:57.609390   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.609428   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.609555   85575 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-558164"
	I1028 11:37:57.609596   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.609953   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.609994   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.610042   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.610590   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.610674   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.610691   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.611109   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.611122   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.611145   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.611230   85575 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 11:37:57.611385   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.611469   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.612448   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.612488   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.612705   85575 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:37:57.612729   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 11:37:57.612754   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.613701   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1028 11:37:57.614434   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.615428   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.615449   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.616704   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.616728   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I1028 11:37:57.616743   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.617094   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.617275   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.617307   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.617514   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.617536   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.617514   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I1028 11:37:57.617963   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.617971   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45475
	I1028 11:37:57.618039   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.618078   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I1028 11:37:57.618151   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.618530   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.618538   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.618627   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.618674   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35169
	I1028 11:37:57.619018   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.619037   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.619037   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.619070   85575 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 11:37:57.619170   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.619394   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.619416   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.619439   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.620019   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.620060   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.620066   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.620081   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.620451   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.620649   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.620663   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.620910   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.621556   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.621643   85575 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 11:37:57.621875   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.622311   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.622564   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.622754   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.623155   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.623185   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.623395   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39181
	I1028 11:37:57.623774   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.623907   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.624118   85575 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 11:37:57.624137   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 11:37:57.624167   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.624201   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.624375   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.624392   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.624464   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:57.624472   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:57.624586   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.624596   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.624650   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:37:57.624668   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:57.624674   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:57.624682   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:57.624689   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:57.624955   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:57.624969   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	W1028 11:37:57.625044   85575 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 11:37:57.625352   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.625413   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.625450   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.626035   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.626066   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.626568   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.627732   85575 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:37:57.628209   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.628459   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.628888   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.628912   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.628955   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.629232   85575 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:37:57.629251   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:37:57.629268   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.629433   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.629625   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.629877   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.630193   85575 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 11:37:57.631478   85575 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:37:57.631496   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 11:37:57.631514   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.634013   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.634353   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.634374   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.634884   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.634954   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.635136   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.635298   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.635447   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.636282   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.636306   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.636488   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.636658   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.636812   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.636864   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1028 11:37:57.637122   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.637369   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.637823   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.637840   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.638233   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.638448   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.639967   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.641767   85575 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 11:37:57.642074   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I1028 11:37:57.642398   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.642824   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.642840   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.643039   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 11:37:57.643052   85575 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 11:37:57.643068   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.643199   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.643366   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.645141   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.646575   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.646682   85575 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 11:37:57.646895   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
	I1028 11:37:57.647180   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.647205   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.647370   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.647463   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.647648   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.647802   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.648068   85575 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 11:37:57.648088   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 11:37:57.648104   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.648250   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.648261   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.647955   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.648594   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.648812   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.650873   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.652020   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I1028 11:37:57.652145   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.652497   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.652619   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41973
	I1028 11:37:57.652739   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 11:37:57.652946   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.652963   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.653096   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.653434   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.653462   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.653496   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.653633   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.653645   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.653699   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.653901   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.654070   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.654133   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.655172   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 11:37:57.655372   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1028 11:37:57.655407   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.656848   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.656973   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.657144   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.657561   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.657580   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.657724   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 11:37:57.657993   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.658052   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.658745   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36285
	I1028 11:37:57.659232   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.659270   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.659964   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.660466   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.660483   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.660882   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.660959   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 11:37:57.661106   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.661604   85575 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 11:37:57.661719   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I1028 11:37:57.662565   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.662994   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.663225   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.663243   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.663356   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 11:37:57.663448   85575 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:37:57.663471   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 11:37:57.663494   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.663784   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.664615   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.664653   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.664766   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 11:37:57.665819   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 11:37:57.666045   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I1028 11:37:57.666054   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 11:37:57.666068   85575 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 11:37:57.666086   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.666410   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.666626   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.666895   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.666912   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.667118   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1028 11:37:57.667368   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.667588   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.667817   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.667926   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.668049   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.668125   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 11:37:57.668164   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.668178   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.668713   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.668779   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.669034   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.669093   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.669888   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.670891   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 11:37:57.671494   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.671582   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.672087   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.672115   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.672152   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.672341   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.672487   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.672797   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.673070   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.673100   85575 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 11:37:57.673209   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 11:37:57.673231   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 11:37:57.673249   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.673464   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1028 11:37:57.673661   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.674096   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.674598   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 11:37:57.674614   85575 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 11:37:57.674630   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.674714   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.674742   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.675074   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.675257   85575 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 11:37:57.675290   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.676448   85575 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 11:37:57.676471   85575 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 11:37:57.676489   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.676793   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.677190   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.677216   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.677366   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.677428   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.677847   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.678025   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.678043   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.678332   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.678641   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.678710   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.678725   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.678821   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.678914   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.679001   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.679113   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 11:37:57.680246   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:37:57.681459   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:37:57.682177   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.682603   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.682635   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.682739   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.682916   85575 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:37:57.682942   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 11:37:57.682961   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.682917   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.683113   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.683254   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	W1028 11:37:57.684067   85575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42158->192.168.39.31:22: read: connection reset by peer
	I1028 11:37:57.684109   85575 retry.go:31] will retry after 143.190095ms: ssh: handshake failed: read tcp 192.168.39.1:42158->192.168.39.31:22: read: connection reset by peer
	I1028 11:37:57.685684   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
	I1028 11:37:57.685765   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.686028   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.686046   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.686207   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.686262   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.686398   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.686505   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.686615   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.687337   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.687352   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.687910   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.688144   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.689530   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.691301   85575 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 11:37:57.692727   85575 out.go:177]   - Using image docker.io/busybox:stable
	I1028 11:37:57.693657   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I1028 11:37:57.694005   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.694236   85575 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:37:57.694251   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 11:37:57.694266   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.695402   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.695432   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.695809   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.696062   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.697374   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.697550   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.697809   85575 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:37:57.697822   85575 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:37:57.697837   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.697895   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.697908   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.698034   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.698154   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.698236   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.698308   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.700696   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.700996   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.701013   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.701146   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.701267   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.701467   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.701576   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:58.013203   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:37:58.031335   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:37:58.056849   85575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:37:58.056930   85575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:37:58.094381   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:37:58.097831   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:37:58.100436   85575 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 11:37:58.100459   85575 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 11:37:58.143514   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:37:58.144997   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:37:58.175948   85575 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 11:37:58.175974   85575 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 11:37:58.191313   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 11:37:58.191331   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 11:37:58.236374   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 11:37:58.259145   85575 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:37:58.259164   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 11:37:58.259319   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:37:58.293886   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 11:37:58.293923   85575 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 11:37:58.298064   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 11:37:58.298084   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 11:37:58.317562   85575 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:37:58.317582   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 11:37:58.353243   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 11:37:58.353271   85575 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 11:37:58.361839   85575 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 11:37:58.361865   85575 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 11:37:58.390477   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 11:37:58.390512   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 11:37:58.438189   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:37:58.482073   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 11:37:58.482110   85575 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 11:37:58.494416   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:37:58.494444   85575 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 11:37:58.519960   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:37:58.586375   85575 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 11:37:58.586409   85575 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 11:37:58.611580   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 11:37:58.611604   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 11:37:58.707599   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 11:37:58.707642   85575 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 11:37:58.743699   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:37:58.781401   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 11:37:58.781437   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 11:37:58.793594   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 11:37:58.793626   85575 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 11:37:58.809062   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:37:58.809084   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 11:37:58.918922   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 11:37:58.918970   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 11:37:58.974367   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:37:59.039594   85575 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:37:59.039626   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 11:37:59.212018   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 11:37:59.212045   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 11:37:59.383529   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:37:59.534469   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.521229343s)
	I1028 11:37:59.534535   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.503174377s)
	I1028 11:37:59.534542   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.534555   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.534564   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.534644   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.534886   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:37:59.534931   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.534941   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.534949   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.534955   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.535048   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.535060   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.535068   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.535079   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.535200   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.535281   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.536627   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:37:59.536649   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.536659   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.597137   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 11:37:59.597168   85575 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 11:37:59.864910   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 11:37:59.864942   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 11:38:00.039475   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 11:38:00.039500   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 11:38:00.170878   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:38:00.170906   85575 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 11:38:00.455413   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:38:00.831213   85575 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.774321855s)
	I1028 11:38:00.831263   85575 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.774281707s)
	I1028 11:38:00.831293   85575 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
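	(editor's note) The pipeline completed above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host address on the minikube network (192.168.39.1). A minimal verification sketch, assuming the default kube-system/coredns ConfigMap layout that the sed expressions in that command target (illustration only, not part of the test run):

	    # inspect the injected hosts stanza after the replace completes
	    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	    #        hosts {
	    #           192.168.39.1 host.minikube.internal
	    #           fallthrough
	    #        }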
	I1028 11:38:00.831978   85575 node_ready.go:35] waiting up to 6m0s for node "addons-558164" to be "Ready" ...
	I1028 11:38:00.835743   85575 node_ready.go:49] node "addons-558164" has status "Ready":"True"
	I1028 11:38:00.835761   85575 node_ready.go:38] duration metric: took 3.761887ms for node "addons-558164" to be "Ready" ...
	I1028 11:38:00.835769   85575 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:38:00.854421   85575 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:01.416145   85575 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-558164" context rescaled to 1 replicas
	I1028 11:38:02.925015   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:02.967619   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.873194397s)
	I1028 11:38:02.967692   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:02.967705   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:02.968048   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:02.968068   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:02.968078   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:02.968087   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:02.968318   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:02.968348   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:02.968368   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:03.041715   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:03.041737   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:03.042099   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:03.042116   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:04.694021   85575 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 11:38:04.694092   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:38:04.697524   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:04.698007   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:38:04.698035   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:04.698253   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:38:04.698452   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:38:04.698625   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:38:04.698772   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:38:05.081314   85575 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 11:38:05.146173   85575 addons.go:234] Setting addon gcp-auth=true in "addons-558164"
	I1028 11:38:05.146243   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:38:05.146663   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:38:05.146702   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:38:05.162751   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36849
	I1028 11:38:05.163268   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:38:05.163824   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:38:05.163847   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:38:05.164161   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:38:05.164645   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:38:05.164675   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:38:05.179777   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1028 11:38:05.180296   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:38:05.180917   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:38:05.180943   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:38:05.181310   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:38:05.181568   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:38:05.183220   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:38:05.183459   85575 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 11:38:05.183494   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:38:05.186386   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:05.186788   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:38:05.186819   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:05.187220   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:38:05.187412   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:38:05.187567   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:38:05.187741   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:38:05.392022   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:05.598176   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.500297613s)
	I1028 11:38:05.598250   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598265   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598314   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.454767946s)
	I1028 11:38:05.598360   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598375   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598425   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.453396207s)
	I1028 11:38:05.598456   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598465   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598470   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.362070954s)
	I1028 11:38:05.598490   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598498   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598549   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.339197748s)
	I1028 11:38:05.598585   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598587   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.160370446s)
	I1028 11:38:05.598597   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598608   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598620   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598713   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.07873069s)
	I1028 11:38:05.598730   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598738   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598836   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.855105031s)
	I1028 11:38:05.598851   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598865   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598937   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.598935   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.624527849s)
	I1028 11:38:05.598960   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598967   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599015   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599021   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.215453687s)
	I1028 11:38:05.599029   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599030   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599038   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599046   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	W1028 11:38:05.599053   85575 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 11:38:05.599057   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599070   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599039   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599095   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599098   85575 retry.go:31] will retry after 195.291749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
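	(editor's note) The error above is an ordering problem: the VolumeSnapshotClass is submitted in the same kubectl invocation that creates its CRD, so the REST mapping for snapshot.storage.k8s.io/v1 is not yet discoverable when the object is applied; the retry logged here succeeds once the CRDs have been registered. A hedged sketch of the usual manual workaround (not minikube's own code path) is to apply the CRDs on their own, wait for them to be established, and only then apply objects of the new kinds:

	    # apply the snapshot CRDs first (paths taken from the log above)
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    # wait until the API server reports the CRD as Established
	    kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # then apply the VolumeSnapshotClass that depends on it
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml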
	I1028 11:38:05.599104   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599166   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.599191   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599198   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599205   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599210   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599276   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.599306   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599312   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599356   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599363   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599371   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599377   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599424   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.599446   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599452   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599459   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599465   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.600502   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.600532   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.600539   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599078   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.600749   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.600862   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.600888   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.600894   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.600901   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.600907   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.602186   85575 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-558164 service yakd-dashboard -n yakd-dashboard
	
	I1028 11:38:05.602482   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.602505   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.602530   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.602537   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.603458   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.603489   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.603495   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604082   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604098   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604108   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604125   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604134   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604145   85575 addons.go:475] Verifying addon metrics-server=true in "addons-558164"
	I1028 11:38:05.604336   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604379   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604390   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604400   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.604410   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.604503   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604518   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604528   85575 addons.go:475] Verifying addon ingress=true in "addons-558164"
	I1028 11:38:05.604715   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604886   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604922   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604934   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604943   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.604951   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.605275   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.605290   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.605303   85575 addons.go:475] Verifying addon registry=true in "addons-558164"
	I1028 11:38:05.605555   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.605704   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.605590   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.606971   85575 out.go:177] * Verifying registry addon...
	I1028 11:38:05.607177   85575 out.go:177] * Verifying ingress addon...
	I1028 11:38:05.609239   85575 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 11:38:05.609433   85575 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 11:38:05.621964   85575 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 11:38:05.621985   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:05.627377   85575 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 11:38:05.627400   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
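	(editor's note) The kapi waits above poll the registry and ingress-nginx pods until they leave Pending. A roughly equivalent manual check for the registry addon, using the same namespace and label selector shown in the log (an illustration, not the test's own code):

	    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry \
	      --for=condition=Ready --timeout=6m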
	I1028 11:38:05.636349   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.636368   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.636628   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.636650   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.795147   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:38:06.117016   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:06.117808   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:06.792609   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:06.792743   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:07.075814   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.62033457s)
	I1028 11:38:07.075871   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.075897   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.075912   85575 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.8924203s)
	I1028 11:38:07.076162   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.076206   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:07.076205   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:07.076221   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.076235   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.076471   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.076502   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:07.076515   85575 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-558164"
	I1028 11:38:07.077552   85575 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 11:38:07.077560   85575 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 11:38:07.079447   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:38:07.080633   85575 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 11:38:07.080655   85575 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 11:38:07.080666   85575 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 11:38:07.108586   85575 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 11:38:07.108619   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:07.122207   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:07.122208   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:07.288223   85575 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 11:38:07.288251   85575 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 11:38:07.392492   85575 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:38:07.392521   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 11:38:07.441492   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:38:07.587089   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:07.615315   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:07.616431   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:07.860433   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:07.906897   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.111691676s)
	I1028 11:38:07.906967   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.906992   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.907356   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:07.907375   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.907398   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:07.907414   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.907423   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.907676   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.907694   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:08.086020   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:08.113726   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:08.114282   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:08.591441   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:08.684556   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:08.687733   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:08.721732   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.280201986s)
	I1028 11:38:08.721790   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:08.721805   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:08.722065   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:08.722087   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:08.722097   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:08.722106   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:08.722333   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:08.722381   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:08.722399   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:08.723317   85575 addons.go:475] Verifying addon gcp-auth=true in "addons-558164"
	I1028 11:38:08.724616   85575 out.go:177] * Verifying gcp-auth addon...
	I1028 11:38:08.726521   85575 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 11:38:08.737885   85575 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 11:38:08.737904   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:09.086612   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:09.114140   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:09.114448   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:09.230179   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:09.590166   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:09.614222   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:09.614261   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:09.729827   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:10.085472   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:10.113080   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:10.113431   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:10.230513   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:10.360509   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:10.585454   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:10.613084   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:10.614336   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:10.730562   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:11.085756   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:11.113661   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:11.113867   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:11.230310   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:11.585062   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:11.616698   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:11.618902   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:11.730036   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:12.086110   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:12.113094   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:12.113237   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:12.230126   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:12.585348   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:12.613768   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:12.615338   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:12.730211   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:12.860309   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:13.085981   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:13.114249   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:13.114848   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:13.230844   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:13.587311   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:13.615144   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:13.615274   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:13.729742   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:14.259622   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:14.266475   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:14.266841   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:14.268935   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:14.587446   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:14.613080   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:14.613164   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:14.729850   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:15.085153   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:15.113050   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:15.113139   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:15.230047   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:15.360296   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:15.585811   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:15.612666   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:15.613477   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:15.729459   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:16.084998   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:16.113403   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:16.115121   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:16.230626   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:16.586058   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:16.613183   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:16.613974   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:16.730181   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:17.086034   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:17.113140   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:17.114432   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:17.230398   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:17.360766   85575 pod_ready.go:93] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.360789   85575 pod_ready.go:82] duration metric: took 16.506340866s for pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.360798   85575 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6tgvv" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.364703   85575 pod_ready.go:93] pod "coredns-7c65d6cfc9-6tgvv" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.364727   85575 pod_ready.go:82] duration metric: took 3.921896ms for pod "coredns-7c65d6cfc9-6tgvv" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.364740   85575 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.366215   85575 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mfdr7" not found
	I1028 11:38:17.366237   85575 pod_ready.go:82] duration metric: took 1.489435ms for pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace to be "Ready" ...
	E1028 11:38:17.366247   85575 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mfdr7" not found
	I1028 11:38:17.366252   85575 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.369860   85575 pod_ready.go:93] pod "etcd-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.369879   85575 pod_ready.go:82] duration metric: took 3.620568ms for pod "etcd-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.369887   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.373437   85575 pod_ready.go:93] pod "kube-apiserver-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.373452   85575 pod_ready.go:82] duration metric: took 3.560184ms for pod "kube-apiserver-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.373460   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.558227   85575 pod_ready.go:93] pod "kube-controller-manager-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.558255   85575 pod_ready.go:82] duration metric: took 184.789051ms for pod "kube-controller-manager-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.558266   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pbrhz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.584747   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:17.613367   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:17.613912   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:17.732093   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:17.958474   85575 pod_ready.go:93] pod "kube-proxy-pbrhz" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.958500   85575 pod_ready.go:82] duration metric: took 400.227461ms for pod "kube-proxy-pbrhz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.958512   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.086182   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:18.112638   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:18.113092   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:18.230593   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:18.359139   85575 pod_ready.go:93] pod "kube-scheduler-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:18.359174   85575 pod_ready.go:82] duration metric: took 400.654865ms for pod "kube-scheduler-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.359192   85575 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tmgxz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.584980   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:18.613259   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:18.613911   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:18.730719   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:18.759195   85575 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tmgxz" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:18.759218   85575 pod_ready.go:82] duration metric: took 400.017509ms for pod "nvidia-device-plugin-daemonset-tmgxz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.759227   85575 pod_ready.go:39] duration metric: took 17.923448238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:38:18.759247   85575 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:38:18.759308   85575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:38:18.808892   85575 api_server.go:72] duration metric: took 21.284167287s to wait for apiserver process to appear ...
	I1028 11:38:18.808918   85575 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:38:18.808939   85575 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I1028 11:38:18.812999   85575 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I1028 11:38:18.813930   85575 api_server.go:141] control plane version: v1.31.2
	I1028 11:38:18.813951   85575 api_server.go:131] duration metric: took 5.02705ms to wait for apiserver health ...
	I1028 11:38:18.813959   85575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:38:18.967090   85575 system_pods.go:59] 18 kube-system pods found
	I1028 11:38:18.967123   85575 system_pods.go:61] "amd-gpu-device-plugin-hf6nm" [0741f17c-8923-4320-9291-a8c931291ac0] Running
	I1028 11:38:18.967129   85575 system_pods.go:61] "coredns-7c65d6cfc9-6tgvv" [3f418701-d48a-4380-a42c-d4facbdb4f25] Running
	I1028 11:38:18.967135   85575 system_pods.go:61] "csi-hostpath-attacher-0" [b72fb2c5-aba3-42da-8842-c7c82b4dc7d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 11:38:18.967141   85575 system_pods.go:61] "csi-hostpath-resizer-0" [fbb4ad73-884c-49ad-afce-83f9db13c7bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 11:38:18.967149   85575 system_pods.go:61] "csi-hostpathplugin-w9lwc" [a47fd224-db98-4ad2-b5d3-3c0215182531] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 11:38:18.967156   85575 system_pods.go:61] "etcd-addons-558164" [9cc5084d-707b-43d9-b040-6fd37f3039d1] Running
	I1028 11:38:18.967162   85575 system_pods.go:61] "kube-apiserver-addons-558164" [64eab89a-fbf4-4a89-89a7-fe2d257b2c4a] Running
	I1028 11:38:18.967167   85575 system_pods.go:61] "kube-controller-manager-addons-558164" [fee40a2a-2feb-46d2-8d34-673155f16349] Running
	I1028 11:38:18.967176   85575 system_pods.go:61] "kube-ingress-dns-minikube" [4897117b-12e8-4427-823d-350b57c963e1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1028 11:38:18.967186   85575 system_pods.go:61] "kube-proxy-pbrhz" [1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa] Running
	I1028 11:38:18.967194   85575 system_pods.go:61] "kube-scheduler-addons-558164" [d18c9948-8ede-493d-b11d-548cd422d0a3] Running
	I1028 11:38:18.967200   85575 system_pods.go:61] "metrics-server-84c5f94fbc-xzgq8" [7cebd793-5c4b-4588-bba1-fdb19c5e4fe4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 11:38:18.967206   85575 system_pods.go:61] "nvidia-device-plugin-daemonset-tmgxz" [2222e84c-777d-4de9-a7d0-c0f8307c6df7] Running
	I1028 11:38:18.967213   85575 system_pods.go:61] "registry-66c9cd494c-knm9h" [ef5d7a78-4f98-44f2-8f1f-121ec2384ac3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1028 11:38:18.967220   85575 system_pods.go:61] "registry-proxy-6mfkq" [4c6c611d-0f32-46ff-b60d-db1ab8734769] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1028 11:38:18.967230   85575 system_pods.go:61] "snapshot-controller-56fcc65765-9492j" [69e9e3e8-53e2-4132-a09f-5f8ce0b786a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:18.967238   85575 system_pods.go:61] "snapshot-controller-56fcc65765-brfbf" [26209eed-8f71-4c6e-b5ec-7232a38b8ec5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:18.967241   85575 system_pods.go:61] "storage-provisioner" [3918cbcc-ee3e-4c15-8d21-f576b50aec1d] Running
	I1028 11:38:18.967252   85575 system_pods.go:74] duration metric: took 153.286407ms to wait for pod list to return data ...
	I1028 11:38:18.967263   85575 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:38:19.085360   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:19.113464   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:19.114452   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:19.158362   85575 default_sa.go:45] found service account: "default"
	I1028 11:38:19.158385   85575 default_sa.go:55] duration metric: took 191.1116ms for default service account to be created ...
	I1028 11:38:19.158394   85575 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:38:19.230583   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:19.365083   85575 system_pods.go:86] 18 kube-system pods found
	I1028 11:38:19.365114   85575 system_pods.go:89] "amd-gpu-device-plugin-hf6nm" [0741f17c-8923-4320-9291-a8c931291ac0] Running
	I1028 11:38:19.365121   85575 system_pods.go:89] "coredns-7c65d6cfc9-6tgvv" [3f418701-d48a-4380-a42c-d4facbdb4f25] Running
	I1028 11:38:19.365128   85575 system_pods.go:89] "csi-hostpath-attacher-0" [b72fb2c5-aba3-42da-8842-c7c82b4dc7d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 11:38:19.365135   85575 system_pods.go:89] "csi-hostpath-resizer-0" [fbb4ad73-884c-49ad-afce-83f9db13c7bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 11:38:19.365142   85575 system_pods.go:89] "csi-hostpathplugin-w9lwc" [a47fd224-db98-4ad2-b5d3-3c0215182531] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 11:38:19.365146   85575 system_pods.go:89] "etcd-addons-558164" [9cc5084d-707b-43d9-b040-6fd37f3039d1] Running
	I1028 11:38:19.365151   85575 system_pods.go:89] "kube-apiserver-addons-558164" [64eab89a-fbf4-4a89-89a7-fe2d257b2c4a] Running
	I1028 11:38:19.365154   85575 system_pods.go:89] "kube-controller-manager-addons-558164" [fee40a2a-2feb-46d2-8d34-673155f16349] Running
	I1028 11:38:19.365162   85575 system_pods.go:89] "kube-ingress-dns-minikube" [4897117b-12e8-4427-823d-350b57c963e1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1028 11:38:19.365165   85575 system_pods.go:89] "kube-proxy-pbrhz" [1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa] Running
	I1028 11:38:19.365170   85575 system_pods.go:89] "kube-scheduler-addons-558164" [d18c9948-8ede-493d-b11d-548cd422d0a3] Running
	I1028 11:38:19.365175   85575 system_pods.go:89] "metrics-server-84c5f94fbc-xzgq8" [7cebd793-5c4b-4588-bba1-fdb19c5e4fe4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 11:38:19.365181   85575 system_pods.go:89] "nvidia-device-plugin-daemonset-tmgxz" [2222e84c-777d-4de9-a7d0-c0f8307c6df7] Running
	I1028 11:38:19.365186   85575 system_pods.go:89] "registry-66c9cd494c-knm9h" [ef5d7a78-4f98-44f2-8f1f-121ec2384ac3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1028 11:38:19.365194   85575 system_pods.go:89] "registry-proxy-6mfkq" [4c6c611d-0f32-46ff-b60d-db1ab8734769] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1028 11:38:19.365202   85575 system_pods.go:89] "snapshot-controller-56fcc65765-9492j" [69e9e3e8-53e2-4132-a09f-5f8ce0b786a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:19.365208   85575 system_pods.go:89] "snapshot-controller-56fcc65765-brfbf" [26209eed-8f71-4c6e-b5ec-7232a38b8ec5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:19.365215   85575 system_pods.go:89] "storage-provisioner" [3918cbcc-ee3e-4c15-8d21-f576b50aec1d] Running
	I1028 11:38:19.365224   85575 system_pods.go:126] duration metric: took 206.823166ms to wait for k8s-apps to be running ...
	I1028 11:38:19.365232   85575 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:38:19.365277   85575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:38:19.393954   85575 system_svc.go:56] duration metric: took 28.710964ms WaitForService to wait for kubelet
	I1028 11:38:19.393981   85575 kubeadm.go:582] duration metric: took 21.869263514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:38:19.394001   85575 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:38:19.560372   85575 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:38:19.560400   85575 node_conditions.go:123] node cpu capacity is 2
	I1028 11:38:19.560414   85575 node_conditions.go:105] duration metric: took 166.408086ms to run NodePressure ...
	I1028 11:38:19.560427   85575 start.go:241] waiting for startup goroutines ...
	I1028 11:38:19.584849   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:19.613852   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:19.614237   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:19.729865   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:20.084689   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:20.113998   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:20.114935   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:20.231281   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:20.585533   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:20.613436   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:20.614423   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:20.730286   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:21.085246   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:21.112412   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:21.113224   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:21.229897   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:21.679304   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:21.679408   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:21.679905   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:21.777947   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:22.085180   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:22.113736   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:22.113870   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:22.230498   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:22.586406   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:22.613742   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:22.614445   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:22.729751   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:23.085546   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:23.112538   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:23.112783   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:23.229569   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:23.586050   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:23.612683   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:23.614748   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:23.730186   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:24.084743   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:24.113652   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:24.113834   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:24.230085   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:24.585318   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:24.613658   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:24.614091   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:24.730503   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:25.085173   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:25.113260   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:25.113901   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:25.229699   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:25.584826   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:25.615072   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:25.615085   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:25.729396   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:26.085388   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:26.113419   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:26.113629   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:26.230040   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:26.585576   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:26.613210   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:26.613885   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:26.730172   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:27.085069   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:27.114422   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:27.114524   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:27.229967   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:27.585568   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:27.613502   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:27.614635   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:27.730823   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:28.084609   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:28.113850   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:28.114026   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:28.229339   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:28.585076   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:28.616112   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:28.618045   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:28.731113   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:29.085144   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:29.114101   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:29.114228   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:29.230238   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:29.585850   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:29.613404   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:29.614225   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:29.729930   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:30.084713   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:30.113144   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:30.114070   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:30.230771   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:30.585684   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:30.613518   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:30.613900   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:30.730126   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:31.085936   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:31.113909   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:31.114351   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:31.229732   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:31.585822   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:31.614147   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:31.616194   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:31.730201   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:32.086291   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:32.113641   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:32.113763   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:32.230723   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:32.585705   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:32.614112   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:32.614999   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:32.729907   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:33.084886   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:33.114082   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:33.114174   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:33.229832   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:33.586791   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:33.614823   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:33.617546   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:33.730241   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:34.085554   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:34.118098   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:34.118220   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:34.229478   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:34.585604   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:34.614084   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:34.614330   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:34.730175   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:35.085063   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:35.113633   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:35.113752   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:35.231151   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:35.585521   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:35.613489   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:35.613798   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:35.730464   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:36.086575   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:36.113459   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:36.114740   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:36.230299   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:36.585743   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:36.614032   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:36.615395   85575 kapi.go:107] duration metric: took 31.005961393s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 11:38:36.730081   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:37.085275   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:37.112747   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:37.230225   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:37.585637   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:37.613947   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:37.730260   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:38.085359   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:38.112753   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:38.230003   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:38.585834   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:38.613750   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:38.729799   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:39.085536   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:39.113673   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:39.229931   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:39.585184   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:39.614201   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:39.730223   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:40.085906   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:40.114866   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:40.229865   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:40.873892   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:40.874177   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:40.875180   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:41.086400   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:41.112913   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:41.230418   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:41.585576   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:41.613658   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:41.730797   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:42.085061   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:42.113856   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:42.230484   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:42.585296   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:42.613637   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:42.729839   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:43.084882   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:43.113931   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:43.230337   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:43.586064   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:43.613839   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:43.730687   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:44.085633   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:44.112960   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:44.230445   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:44.585776   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:44.613368   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:44.730496   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:45.085523   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:45.113188   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:45.230240   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:45.585390   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:45.612763   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:45.731143   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:46.086325   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:46.113765   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:46.229981   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:46.585075   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:46.613823   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:46.730384   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:47.085382   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:47.113342   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:47.230531   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:47.584448   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:47.614056   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:47.730749   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:48.085519   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:48.112985   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:48.230084   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:48.585932   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:48.613428   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:48.730432   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:49.088379   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:49.113764   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:49.230242   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:49.586377   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:49.613722   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:49.729549   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:50.089248   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:50.189467   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:50.230269   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:50.585824   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:50.613677   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:50.730308   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:51.086355   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:51.114785   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:51.232032   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:51.586014   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:51.617015   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:51.731910   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:52.086234   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:52.113016   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:52.230849   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:52.585554   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:52.613696   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:52.729540   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:53.085654   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:53.113788   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:53.230520   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:53.642690   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:53.644408   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:53.730695   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:54.087230   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:54.114213   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:54.230667   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:54.585251   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:54.613137   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:54.730460   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:55.085868   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:55.113191   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:55.229527   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:55.586438   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:55.616076   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:55.729939   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:56.084590   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:56.113232   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:56.230818   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:56.584595   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:56.613124   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:56.730337   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:57.088530   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:57.113752   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:57.230143   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:57.586664   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:57.613413   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:57.730866   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:58.084916   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:58.113406   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:58.232615   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:58.585760   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:58.613309   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:58.729580   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:59.098746   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:59.114996   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:59.231037   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:59.587067   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:59.613694   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:59.734859   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:00.273804   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:00.273955   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:00.274636   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:00.587002   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:00.613267   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:00.729293   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:01.085430   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:01.112737   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:01.230461   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:01.587603   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:01.615336   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:01.734928   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:02.086235   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:02.112469   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:02.230255   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:02.586244   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:02.614353   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:02.734786   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:03.085193   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:03.116857   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:03.230391   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:03.584785   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:03.613359   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:03.729703   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:04.086018   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:04.113473   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:04.230638   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:04.586103   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:04.613351   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:04.729728   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:05.085472   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:05.112483   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:05.230169   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:05.585544   85575 kapi.go:107] duration metric: took 58.504874375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 11:39:05.613323   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:05.737580   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:06.113251   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:06.230008   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:06.614493   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:06.729620   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:07.113584   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:07.230092   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:07.613279   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:07.729521   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:08.113416   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:08.229991   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:08.613406   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:08.729888   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:09.113742   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:09.229858   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:09.613518   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:09.732442   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:10.113704   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:10.231148   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:10.614166   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:10.731000   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:11.115419   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:11.229795   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:11.655748   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:11.871789   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:12.114029   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:12.230901   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:12.613419   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:12.730243   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:13.113459   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:13.229787   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:13.613438   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:13.730541   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:14.113822   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:14.232424   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:14.613639   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:14.730382   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:15.114248   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:15.230537   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:15.613599   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:15.729603   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:16.115397   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:16.230753   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:16.615424   85575 kapi.go:107] duration metric: took 1m11.006180181s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 11:39:16.729986   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:17.233198   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:17.730667   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:18.230155   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:18.730481   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:19.229836   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:19.730390   85575 kapi.go:107] duration metric: took 1m11.003864421s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 11:39:19.732115   85575 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-558164 cluster.
	I1028 11:39:19.733669   85575 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 11:39:19.734832   85575 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1028 11:39:19.736138   85575 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner-rancher, cloud-spanner, ingress-dns, yakd, storage-provisioner, metrics-server, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1028 11:39:19.737342   85575 addons.go:510] duration metric: took 1m22.212581031s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin storage-provisioner-rancher cloud-spanner ingress-dns yakd storage-provisioner metrics-server inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1028 11:39:19.737385   85575 start.go:246] waiting for cluster config update ...
	I1028 11:39:19.737404   85575 start.go:255] writing updated cluster config ...
	I1028 11:39:19.737663   85575 ssh_runner.go:195] Run: rm -f paused
	I1028 11:39:19.786000   85575 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:39:19.787416   85575 out.go:177] * Done! kubectl is now configured to use "addons-558164" cluster and "default" namespace by default
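	[editor's note] The gcp-auth messages above state that pods can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key in their pod configuration. The following is only an illustrative sketch of that hint, not part of the test run: the pod name, container, and image are hypothetical, and the label value "true" is an assumption (per the message, the key itself is what the addon looks for). The cluster context name comes from this report.
	
	    kubectl --context addons-558164 apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"   # label key from the gcp-auth message above; value is an assumption
	    spec:
	      containers:
	      - name: app                      # hypothetical container and image, for illustration only
	        image: busybox
	        command: ["sleep", "3600"]
	    EOF
	
	As the log also notes, pods that already exist would need to be recreated, or the addon enable rerun with --refresh, for a change in credential mounting to take effect.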
	
	
	==> CRI-O <==
	Oct 28 11:42:38 addons-558164 crio[664]: time="2024-10-28 11:42:38.972622498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115758972598385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecde57ae-33cc-42a0-b935-7e7060afe2b3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:42:38 addons-558164 crio[664]: time="2024-10-28 11:42:38.973126609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1299f334-f1d6-4723-9a21-dffdd747f68e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:38 addons-558164 crio[664]: time="2024-10-28 11:42:38.973182097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1299f334-f1d6-4723-9a21-dffdd747f68e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:38 addons-558164 crio[664]: time="2024-10-28 11:42:38.973820178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-99ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eab6054e2296eb565ed86f51c5289ef2735cdd68b4203067e8c410da5e5ee,PodSandboxId:b5e5a37f60de740fbe9818a7c65acc4fecc14d25793933026595a3f062a52258,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730115555832780675,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-vgmwb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed522801-cd94-45f9-bc2d-dc78c62642a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29c61222d2c59f3efb075e3ba4a6d9b10f12d2c1a60692d4243882b54d9821f8,PodSandboxId:6fb52a436f921bab57664de3a50ca7e63ca3f372a51e32fff223f26443606f7d,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730115532687555103,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wssht,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d69567c0-0143-4ab1-a375-e18a9673e267,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c798516f2dbc1b9161fd51b86b08c481222238f88db78fb3f21164eeb1102f71,PodSandboxId:4cc0e7cfe79eff9abb10680c58509140e1fe6e164c693e907a3136327aba1c34,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730115531754306830,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xdgf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9097f4-97e6-4ff8-a34d-e0a7e73dd6a8,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304be093ffed6f554aecab99521819b77f7c255b6cd28e9f7297468f02ead964,PodSandboxId:58e7786adf1021607904916364b1d313fb87d33a3b5eca0e2e9ccf140494afae,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730115508558018631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4897117b-12e8-4427-823d-350b57c963e1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf11
6763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f64
1318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca3
2cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe35
1350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8
b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=1299f334-f1d6-4723-9a21-dffdd747f68e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.014639300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f3068e8-cd50-402e-8ee1-c55f9fe30c36 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.014943557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f3068e8-cd50-402e-8ee1-c55f9fe30c36 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.016370339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddc3c513-2265-45a1-a420-6be1b5971850 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.018131381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115759018099976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddc3c513-2265-45a1-a420-6be1b5971850 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.018817079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cff3de83-9bf0-4ace-97fc-c2716e561c82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.018898395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cff3de83-9bf0-4ace-97fc-c2716e561c82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.019250725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-99ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eab6054e2296eb565ed86f51c5289ef2735cdd68b4203067e8c410da5e5ee,PodSandboxId:b5e5a37f60de740fbe9818a7c65acc4fecc14d25793933026595a3f062a52258,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730115555832780675,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-vgmwb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed522801-cd94-45f9-bc2d-dc78c62642a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29c61222d2c59f3efb075e3ba4a6d9b10f12d2c1a60692d4243882b54d9821f8,PodSandboxId:6fb52a436f921bab57664de3a50ca7e63ca3f372a51e32fff223f26443606f7d,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730115532687555103,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wssht,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d69567c0-0143-4ab1-a375-e18a9673e267,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c798516f2dbc1b9161fd51b86b08c481222238f88db78fb3f21164eeb1102f71,PodSandboxId:4cc0e7cfe79eff9abb10680c58509140e1fe6e164c693e907a3136327aba1c34,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730115531754306830,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xdgf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9097f4-97e6-4ff8-a34d-e0a7e73dd6a8,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304be093ffed6f554aecab99521819b77f7c255b6cd28e9f7297468f02ead964,PodSandboxId:58e7786adf1021607904916364b1d313fb87d33a3b5eca0e2e9ccf140494afae,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730115508558018631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4897117b-12e8-4427-823d-350b57c963e1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf11
6763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f64
1318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca3
2cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe35
1350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8
b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=cff3de83-9bf0-4ace-97fc-c2716e561c82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.059172751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c290ca0-33f6-4e27-8216-714ec3d2d97d name=/runtime.v1.RuntimeService/Version
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.059267876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c290ca0-33f6-4e27-8216-714ec3d2d97d name=/runtime.v1.RuntimeService/Version
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.060815478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad3604b5-406f-4183-bc4d-5433c3256dbd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.062608525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115759062577163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad3604b5-406f-4183-bc4d-5433c3256dbd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.063253752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a50f8d76-25a5-414a-bff4-fdc8aa1728b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.063368890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a50f8d76-25a5-414a-bff4-fdc8aa1728b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.063876288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-99ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eab6054e2296eb565ed86f51c5289ef2735cdd68b4203067e8c410da5e5ee,PodSandboxId:b5e5a37f60de740fbe9818a7c65acc4fecc14d25793933026595a3f062a52258,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730115555832780675,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-vgmwb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed522801-cd94-45f9-bc2d-dc78c62642a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29c61222d2c59f3efb075e3ba4a6d9b10f12d2c1a60692d4243882b54d9821f8,PodSandboxId:6fb52a436f921bab57664de3a50ca7e63ca3f372a51e32fff223f26443606f7d,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730115532687555103,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wssht,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d69567c0-0143-4ab1-a375-e18a9673e267,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c798516f2dbc1b9161fd51b86b08c481222238f88db78fb3f21164eeb1102f71,PodSandboxId:4cc0e7cfe79eff9abb10680c58509140e1fe6e164c693e907a3136327aba1c34,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730115531754306830,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xdgf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9097f4-97e6-4ff8-a34d-e0a7e73dd6a8,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304be093ffed6f554aecab99521819b77f7c255b6cd28e9f7297468f02ead964,PodSandboxId:58e7786adf1021607904916364b1d313fb87d33a3b5eca0e2e9ccf140494afae,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730115508558018631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4897117b-12e8-4427-823d-350b57c963e1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf11
6763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f64
1318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca3
2cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe35
1350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8
b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=a50f8d76-25a5-414a-bff4-fdc8aa1728b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.095257301Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39b146ee-6a6e-4294-917c-aba34054c986 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.095317622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39b146ee-6a6e-4294-917c-aba34054c986 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.096715171Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e9f90eb-6415-4500-b57b-8ad41366912e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.097954574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115759097933420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e9f90eb-6415-4500-b57b-8ad41366912e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.098388915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5a2727e-cbc9-4e0a-a96c-3d732b5a0bfe name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.098454312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5a2727e-cbc9-4e0a-a96c-3d732b5a0bfe name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:42:39 addons-558164 crio[664]: time="2024-10-28 11:42:39.098819208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-99ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eab6054e2296eb565ed86f51c5289ef2735cdd68b4203067e8c410da5e5ee,PodSandboxId:b5e5a37f60de740fbe9818a7c65acc4fecc14d25793933026595a3f062a52258,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730115555832780675,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-vgmwb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed522801-cd94-45f9-bc2d-dc78c62642a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29c61222d2c59f3efb075e3ba4a6d9b10f12d2c1a60692d4243882b54d9821f8,PodSandboxId:6fb52a436f921bab57664de3a50ca7e63ca3f372a51e32fff223f26443606f7d,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730115532687555103,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wssht,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d69567c0-0143-4ab1-a375-e18a9673e267,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c798516f2dbc1b9161fd51b86b08c481222238f88db78fb3f21164eeb1102f71,PodSandboxId:4cc0e7cfe79eff9abb10680c58509140e1fe6e164c693e907a3136327aba1c34,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730115531754306830,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xdgf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9097f4-97e6-4ff8-a34d-e0a7e73dd6a8,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304be093ffed6f554aecab99521819b77f7c255b6cd28e9f7297468f02ead964,PodSandboxId:58e7786adf1021607904916364b1d313fb87d33a3b5eca0e2e9ccf140494afae,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730115508558018631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4897117b-12e8-4427-823d-350b57c963e1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf11
6763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f64
1318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca3
2cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe35
1350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8
b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=f5a2727e-cbc9-4e0a-a96c-3d732b5a0bfe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fdfeaf60f156b       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   62c9b018b548e       nginx
	437168bf2c657       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   802b7b16db1da       busybox
	b15eab6054e22       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   b5e5a37f60de7       ingress-nginx-controller-5f85ff4588-vgmwb
	29c61222d2c59       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   6fb52a436f921       ingress-nginx-admission-patch-wssht
	c798516f2dbc1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   4cc0e7cfe79ef       ingress-nginx-admission-create-xdgf6
	6181d8b4d6c1f       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   0c6207db6229a       metrics-server-84c5f94fbc-xzgq8
	304be093ffed6       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   58e7786adf102       kube-ingress-dns-minikube
	e30ed896c3ed4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   72d8edbce8132       amd-gpu-device-plugin-hf6nm
	b84352c4fbc20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   34d3ff3415a5f       storage-provisioner
	f39e4e545f8d3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   e24847365280f       coredns-7c65d6cfc9-6tgvv
	53485b60a86d8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   5615740d67274       kube-proxy-pbrhz
	2f17b035df516       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   5ea903e5aa2f8       etcd-addons-558164
	c04614c73051f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             4 minutes ago       Running             kube-apiserver            0                   7d82d01a9fd32       kube-apiserver-addons-558164
	942b5fe351350       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             4 minutes ago       Running             kube-scheduler            0                   c361b789c8f62       kube-scheduler-addons-558164
	b449d05a1cada       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             4 minutes ago       Running             kube-controller-manager   0                   40ebadee68961       kube-controller-manager-addons-558164
	
	
	==> coredns [f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008] <==
	[INFO] 10.244.0.8:58813 - 11240 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00010131s
	[INFO] 10.244.0.8:58813 - 28728 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000165114s
	[INFO] 10.244.0.8:58813 - 6467 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000406564s
	[INFO] 10.244.0.8:58813 - 46648 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00015767s
	[INFO] 10.244.0.8:58813 - 11342 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076731s
	[INFO] 10.244.0.8:58813 - 5450 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000134648s
	[INFO] 10.244.0.8:58813 - 56878 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000078747s
	[INFO] 10.244.0.8:39565 - 33253 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096778s
	[INFO] 10.244.0.8:39565 - 32952 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000492s
	[INFO] 10.244.0.8:38492 - 40987 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115966s
	[INFO] 10.244.0.8:38492 - 40745 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041991s
	[INFO] 10.244.0.8:51820 - 41954 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051045s
	[INFO] 10.244.0.8:51820 - 41490 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061655s
	[INFO] 10.244.0.8:55098 - 26483 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007605s
	[INFO] 10.244.0.8:55098 - 26296 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050836s
	[INFO] 10.244.0.23:52611 - 10043 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000419627s
	[INFO] 10.244.0.23:60547 - 24930 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156214s
	[INFO] 10.244.0.23:50997 - 56774 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001586799s
	[INFO] 10.244.0.23:33884 - 16720 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099937s
	[INFO] 10.244.0.23:58001 - 47494 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127867s
	[INFO] 10.244.0.23:60428 - 20480 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070921s
	[INFO] 10.244.0.23:41633 - 37923 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003524468s
	[INFO] 10.244.0.23:47354 - 30819 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.003796607s
	[INFO] 10.244.0.27:44941 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000418189s
	[INFO] 10.244.0.27:51530 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130743s
	
	
	==> describe nodes <==
	Name:               addons-558164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-558164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=addons-558164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_37_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-558164
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:37:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-558164
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:42:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:40:25 +0000   Mon, 28 Oct 2024 11:37:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:40:25 +0000   Mon, 28 Oct 2024 11:37:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:40:25 +0000   Mon, 28 Oct 2024 11:37:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:40:25 +0000   Mon, 28 Oct 2024 11:37:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    addons-558164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 de5182324cb444329f5f4628a3a73a2c
	  System UUID:                de518232-4cb4-4432-9f5f-4628a3a73a2c
	  Boot ID:                    1a41ae33-de4f-4d75-8f59-2e9cade0ce3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     hello-world-app-55bf9c44b4-vm8nt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-vgmwb    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m34s
	  kube-system                 amd-gpu-device-plugin-hf6nm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-7c65d6cfc9-6tgvv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m42s
	  kube-system                 etcd-addons-558164                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m47s
	  kube-system                 kube-apiserver-addons-558164                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-addons-558164        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-pbrhz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-558164                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 metrics-server-84c5f94fbc-xzgq8              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m39s  kube-proxy       
	  Normal  Starting                 4m47s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s  kubelet          Node addons-558164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s  kubelet          Node addons-558164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s  kubelet          Node addons-558164 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m46s  kubelet          Node addons-558164 status is now: NodeReady
	  Normal  RegisteredNode           4m43s  node-controller  Node addons-558164 event: Registered Node addons-558164 in Controller
	
	
	==> dmesg <==
	[  +6.481256] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.075285] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.063590] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.230213] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[Oct28 11:38] kauditd_printk_skb: 134 callbacks suppressed
	[  +5.030015] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.036925] kauditd_printk_skb: 63 callbacks suppressed
	[  +9.307677] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.384346] kauditd_printk_skb: 9 callbacks suppressed
	[ +12.130528] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.307274] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.433562] kauditd_printk_skb: 44 callbacks suppressed
	[Oct28 11:39] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.174218] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.123349] kauditd_printk_skb: 18 callbacks suppressed
	[ +19.156058] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.090812] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.034238] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.390912] kauditd_printk_skb: 44 callbacks suppressed
	[Oct28 11:40] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.107545] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.011790] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.954368] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.949438] kauditd_printk_skb: 7 callbacks suppressed
	[Oct28 11:42] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe] <==
	{"level":"info","ts":"2024-10-28T11:39:11.845556Z","caller":"traceutil/trace.go:171","msg":"trace[953549410] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1123; }","duration":"198.440505ms","start":"2024-10-28T11:39:11.647103Z","end":"2024-10-28T11:39:11.845543Z","steps":["trace[953549410] 'read index received'  (duration: 194.570432ms)","trace[953549410] 'applied index is now lower than readState.Index'  (duration: 3.869336ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:39:11.845643Z","caller":"traceutil/trace.go:171","msg":"trace[2012828142] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"209.815285ms","start":"2024-10-28T11:39:11.635822Z","end":"2024-10-28T11:39:11.845637Z","steps":["trace[2012828142] 'process raft request'  (duration: 209.447548ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:39:11.845788Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.669298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:39:11.845808Z","caller":"traceutil/trace.go:171","msg":"trace[1317541856] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1097; }","duration":"198.703892ms","start":"2024-10-28T11:39:11.647099Z","end":"2024-10-28T11:39:11.845803Z","steps":["trace[1317541856] 'agreement among raft nodes before linearized reading'  (duration: 198.655107ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:39:11.847156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.863256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:39:11.847196Z","caller":"traceutil/trace.go:171","msg":"trace[866272014] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"140.908025ms","start":"2024-10-28T11:39:11.706281Z","end":"2024-10-28T11:39:11.847189Z","steps":["trace[866272014] 'agreement among raft nodes before linearized reading'  (duration: 140.810248ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:40:17.353983Z","caller":"traceutil/trace.go:171","msg":"trace[755972929] transaction","detail":"{read_only:false; response_revision:1573; number_of_response:1; }","duration":"380.322012ms","start":"2024-10-28T11:40:16.973628Z","end":"2024-10-28T11:40:17.353950Z","steps":["trace[755972929] 'process raft request'  (duration: 380.205502ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.354310Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:16.973610Z","time spent":"380.539386ms","remote":"127.0.0.1:48170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1539 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-28T11:40:17.354858Z","caller":"traceutil/trace.go:171","msg":"trace[235816913] linearizableReadLoop","detail":"{readStateIndex:1622; appliedIndex:1622; }","duration":"312.126916ms","start":"2024-10-28T11:40:17.042695Z","end":"2024-10-28T11:40:17.354822Z","steps":["trace[235816913] 'read index received'  (duration: 312.123798ms)","trace[235816913] 'applied index is now lower than readState.Index'  (duration: 2.614µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:40:17.354945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.23341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.354977Z","caller":"traceutil/trace.go:171","msg":"trace[698129893] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1573; }","duration":"312.275722ms","start":"2024-10-28T11:40:17.042691Z","end":"2024-10-28T11:40:17.354967Z","steps":["trace[698129893] 'agreement among raft nodes before linearized reading'  (duration: 312.216455ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.354999Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:17.042657Z","time spent":"312.337392ms","remote":"127.0.0.1:48414","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":28,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true "}
	{"level":"warn","ts":"2024-10-28T11:40:17.394897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.075996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.394955Z","caller":"traceutil/trace.go:171","msg":"trace[911482696] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1574; }","duration":"348.144623ms","start":"2024-10-28T11:40:17.046800Z","end":"2024-10-28T11:40:17.394944Z","steps":["trace[911482696] 'agreement among raft nodes before linearized reading'  (duration: 348.02911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.394983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:17.046770Z","time spent":"348.207109ms","remote":"127.0.0.1:47912","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-28T11:40:17.395287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.350717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.395330Z","caller":"traceutil/trace.go:171","msg":"trace[1890499628] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1574; }","duration":"106.389502ms","start":"2024-10-28T11:40:17.288928Z","end":"2024-10-28T11:40:17.395317Z","steps":["trace[1890499628] 'agreement among raft nodes before linearized reading'  (duration: 106.339203ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.395461Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.834672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-28T11:40:17.395494Z","caller":"traceutil/trace.go:171","msg":"trace[1130799877] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1574; }","duration":"110.867059ms","start":"2024-10-28T11:40:17.284621Z","end":"2024-10-28T11:40:17.395488Z","steps":["trace[1130799877] 'agreement among raft nodes before linearized reading'  (duration: 110.781381ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.395615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.039333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.395636Z","caller":"traceutil/trace.go:171","msg":"trace[133035816] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:1574; }","duration":"215.062886ms","start":"2024-10-28T11:40:17.180569Z","end":"2024-10-28T11:40:17.395632Z","steps":["trace[133035816] 'agreement among raft nodes before linearized reading'  (duration: 215.029517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.395783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.85731ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.395814Z","caller":"traceutil/trace.go:171","msg":"trace[446218127] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1574; }","duration":"277.887349ms","start":"2024-10-28T11:40:17.117919Z","end":"2024-10-28T11:40:17.395806Z","steps":["trace[446218127] 'agreement among raft nodes before linearized reading'  (duration: 277.848974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:48.898864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.184748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-health-monitor-controller-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:48.898923Z","caller":"traceutil/trace.go:171","msg":"trace[610907122] range","detail":"{range_begin:/registry/roles/kube-system/external-health-monitor-controller-cfg; range_end:; response_count:0; response_revision:1791; }","duration":"143.280037ms","start":"2024-10-28T11:40:48.755630Z","end":"2024-10-28T11:40:48.898911Z","steps":["trace[610907122] 'range keys from in-memory index tree'  (duration: 143.097356ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:42:39 up 5 min,  0 users,  load average: 0.44, 0.79, 0.42
	Linux addons-558164 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce] <==
	 > logger="UnhandledError"
	E1028 11:39:42.144070       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.14:443: connect: connection refused" logger="UnhandledError"
	E1028 11:39:42.146581       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.14:443: connect: connection refused" logger="UnhandledError"
	I1028 11:39:42.186647       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1028 11:39:43.539170       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.254.124"}
	I1028 11:40:07.833472       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 11:40:08.959934       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 11:40:13.348330       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 11:40:13.548451       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.190.75"}
	E1028 11:40:17.396882       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1028 11:40:24.848706       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 11:40:46.323558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.323595       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.345910       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.345941       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.367613       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.367664       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.370051       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.370157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.423396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.423791       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 11:40:47.369077       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 11:40:47.424984       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 11:40:47.537860       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 11:42:38.078579       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.92.224"}
	
	
	==> kube-controller-manager [b449d05a1cadae8f9c712ab9d8b841c38231dea63911dc13410458b2e8fdca71] <==
	E1028 11:41:04.406187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:41:09.786954       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:41:09.787065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:41:19.203538       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:41:19.203659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:41:19.929430       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:41:19.929544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:41:28.055901       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:41:28.056035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:41:46.032770       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:41:46.032827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:41:50.853839       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:41:50.853985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:41:52.753429       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:41:52.753503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:42:16.551347       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:42:16.551664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:42:18.491819       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:42:18.491865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:42:33.777182       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:42:33.777236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1028 11:42:37.934369       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.175067ms"
	I1028 11:42:37.952294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.851357ms"
	I1028 11:42:37.952566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.92µs"
	I1028 11:42:37.975221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="113.996µs"
	
	
	==> kube-proxy [53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:37:59.431701       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:37:59.458408       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.31"]
	E1028 11:37:59.458497       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:37:59.614462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:37:59.614507       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:37:59.614540       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:37:59.623855       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:37:59.624428       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:37:59.624468       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:37:59.630875       1 config.go:199] "Starting service config controller"
	I1028 11:37:59.630889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:37:59.630909       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:37:59.630912       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:37:59.632694       1 config.go:328] "Starting node config controller"
	I1028 11:37:59.632755       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:37:59.731909       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:37:59.731967       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:37:59.733338       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [942b5fe351350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf] <==
	E1028 11:37:49.953659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:49.953690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:37:49.953714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1028 11:37:49.953592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.774910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 11:37:50.775039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.807361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:37:50.808072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.809333       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:37:50.809382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.827901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 11:37:50.828119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.032659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:37:51.032782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.057803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 11:37:51.057852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.071503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:37:51.071547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.142409       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 11:37:51.143365       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 11:37:51.164072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:37:51.164125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.177256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:37:51.177473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 11:37:53.848036       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 11:42:32 addons-558164 kubelet[1218]: E1028 11:42:32.769621    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115752769365170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:42:32 addons-558164 kubelet[1218]: E1028 11:42:32.769943    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115752769365170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.946614    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69e9e3e8-53e2-4132-a09f-5f8ce0b786a6" containerName="volume-snapshot-controller"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947040    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="csi-provisioner"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947132    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="liveness-probe"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947177    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="csi-snapshotter"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947245    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b72fb2c5-aba3-42da-8842-c7c82b4dc7d4" containerName="csi-attacher"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947281    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="csi-external-health-monitor-controller"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947351    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="node-driver-registrar"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947384    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26209eed-8f71-4c6e-b5ec-7232a38b8ec5" containerName="volume-snapshot-controller"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947469    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="hostpath"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947502    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f03a2b99-c5be-4b6a-b3a5-d89be3baae6d" containerName="task-pv-container"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: E1028 11:42:37.947577    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbb4ad73-884c-49ad-afce-83f9db13c7bd" containerName="csi-resizer"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.947700    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="csi-external-health-monitor-controller"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.947785    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03a2b99-c5be-4b6a-b3a5-d89be3baae6d" containerName="task-pv-container"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.947828    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="26209eed-8f71-4c6e-b5ec-7232a38b8ec5" containerName="volume-snapshot-controller"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.947933    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="csi-provisioner"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.947964    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72fb2c5-aba3-42da-8842-c7c82b4dc7d4" containerName="csi-attacher"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.948032    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="node-driver-registrar"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.948065    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="hostpath"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.948135    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb4ad73-884c-49ad-afce-83f9db13c7bd" containerName="csi-resizer"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.948165    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="liveness-probe"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.948233    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47fd224-db98-4ad2-b5d3-3c0215182531" containerName="csi-snapshotter"
	Oct 28 11:42:37 addons-558164 kubelet[1218]: I1028 11:42:37.948265    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e9e3e8-53e2-4132-a09f-5f8ce0b786a6" containerName="volume-snapshot-controller"
	Oct 28 11:42:38 addons-558164 kubelet[1218]: I1028 11:42:38.066984    1218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77p4\" (UniqueName: \"kubernetes.io/projected/95365b42-ab21-4c85-9c67-ac097572e19c-kube-api-access-n77p4\") pod \"hello-world-app-55bf9c44b4-vm8nt\" (UID: \"95365b42-ab21-4c85-9c67-ac097572e19c\") " pod="default/hello-world-app-55bf9c44b4-vm8nt"
	
	
	==> storage-provisioner [b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a] <==
	I1028 11:38:04.083258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:38:04.100891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:38:04.100982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:38:04.120071       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:38:04.120343       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-558164_93a55497-89d5-4390-8a87-18ee76d0a8fe!
	I1028 11:38:04.122265       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e17b3df1-79ea-4484-bd28-570c3e2acea3", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-558164_93a55497-89d5-4390-8a87-18ee76d0a8fe became leader
	I1028 11:38:04.220935       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-558164_93a55497-89d5-4390-8a87-18ee76d0a8fe!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-558164 -n addons-558164
helpers_test.go:261: (dbg) Run:  kubectl --context addons-558164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-vm8nt ingress-nginx-admission-create-xdgf6 ingress-nginx-admission-patch-wssht
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-558164 describe pod hello-world-app-55bf9c44b4-vm8nt ingress-nginx-admission-create-xdgf6 ingress-nginx-admission-patch-wssht
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-558164 describe pod hello-world-app-55bf9c44b4-vm8nt ingress-nginx-admission-create-xdgf6 ingress-nginx-admission-patch-wssht: exit status 1 (68.020652ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-vm8nt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-558164/192.168.39.31
	Start Time:       Mon, 28 Oct 2024 11:42:37 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n77p4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n77p4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-vm8nt to addons-558164
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xdgf6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wssht" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-558164 describe pod hello-world-app-55bf9c44b4-vm8nt ingress-nginx-admission-create-xdgf6 ingress-nginx-admission-patch-wssht: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 addons disable ingress-dns --alsologtostderr -v=1: (1.076857036s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 addons disable ingress --alsologtostderr -v=1: (7.699200152s)
--- FAIL: TestAddons/parallel/Ingress (155.89s)

x
+
TestAddons/parallel/MetricsServer (366.08s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.271072ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-xzgq8" [7cebd793-5c4b-4588-bba1-fdb19c5e4fe4] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003702054s
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (64.734313ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 2m6.803477004s

** /stderr **
I1028 11:40:05.805511   84965 retry.go:31] will retry after 2.645487499s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (62.391545ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 2m9.512534557s

** /stderr **
I1028 11:40:08.514534   84965 retry.go:31] will retry after 3.646105828s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (67.585249ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 2m13.227151343s

** /stderr **
I1028 11:40:12.229144   84965 retry.go:31] will retry after 8.456169107s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (62.09595ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 2m21.745651295s

** /stderr **
I1028 11:40:20.748264   84965 retry.go:31] will retry after 8.163679154s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (61.816955ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 2m29.972216754s

** /stderr **
I1028 11:40:28.974642   84965 retry.go:31] will retry after 16.188489798s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (64.095058ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 2m46.225782798s

** /stderr **
I1028 11:40:45.228150   84965 retry.go:31] will retry after 30.459409458s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (63.400305ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 3m16.749092704s

** /stderr **
I1028 11:41:15.751251   84965 retry.go:31] will retry after 47.076565481s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (64.023755ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 4m3.890756218s

** /stderr **
I1028 11:42:02.892884   84965 retry.go:31] will retry after 1m2.404754908s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (61.615189ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 5m6.361449538s

** /stderr **
I1028 11:43:05.363827   84965 retry.go:31] will retry after 50.633594368s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (61.125252ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 5m57.061922745s

** /stderr **
I1028 11:43:56.063884   84965 retry.go:31] will retry after 1m15.220363763s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (63.382692ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 7m12.346020874s

** /stderr **
I1028 11:45:11.348314   84965 retry.go:31] will retry after 52.071406537s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-558164 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-558164 top pods -n kube-system: exit status 1 (65.558077ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hf6nm, age: 8m4.484389382s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-558164 -n addons-558164
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 logs -n 25: (1.069404316s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-618409                                                                     | download-only-618409 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| delete  | -p download-only-165595                                                                     | download-only-165595 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-029933 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC |                     |
	|         | binary-mirror-029933                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41615                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-029933                                                                     | binary-mirror-029933 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| addons  | enable dashboard -p                                                                         | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC |                     |
	|         | addons-558164                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC |                     |
	|         | addons-558164                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-558164 --wait=true                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | -p addons-558164                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:40 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-558164 ip                                                                            | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:39 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:39 UTC | 28 Oct 24 11:40 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-558164 ssh cat                                                                       | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | /opt/local-path-provisioner/pvc-ebacc6ce-c961-47ab-93f4-2185834202e1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-558164 ssh curl -s                                                                   | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-558164 addons                                                                        | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:40 UTC | 28 Oct 24 11:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-558164 ip                                                                            | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-558164 addons disable                                                                | addons-558164        | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:37:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:37:13.141222   85575 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:37:13.141449   85575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:37:13.141469   85575 out.go:358] Setting ErrFile to fd 2...
	I1028 11:37:13.141476   85575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:37:13.141958   85575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:37:13.142565   85575 out.go:352] Setting JSON to false
	I1028 11:37:13.143368   85575 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4783,"bootTime":1730110650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:37:13.143459   85575 start.go:139] virtualization: kvm guest
	I1028 11:37:13.145363   85575 out.go:177] * [addons-558164] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:37:13.146623   85575 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:37:13.146625   85575 notify.go:220] Checking for updates...
	I1028 11:37:13.148035   85575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:37:13.149318   85575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:37:13.150556   85575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:37:13.151784   85575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:37:13.153033   85575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:37:13.154683   85575 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:37:13.186107   85575 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:37:13.187318   85575 start.go:297] selected driver: kvm2
	I1028 11:37:13.187329   85575 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:37:13.187339   85575 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:37:13.188069   85575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:37:13.188145   85575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:37:13.202560   85575 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:37:13.202611   85575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:37:13.202894   85575 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:37:13.202933   85575 cni.go:84] Creating CNI manager for ""
	I1028 11:37:13.202995   85575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:37:13.203008   85575 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 11:37:13.203062   85575 start.go:340] cluster config:
	{Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:37:13.203189   85575 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:37:13.204774   85575 out.go:177] * Starting "addons-558164" primary control-plane node in "addons-558164" cluster
	I1028 11:37:13.205814   85575 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:37:13.205847   85575 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:37:13.205855   85575 cache.go:56] Caching tarball of preloaded images
	I1028 11:37:13.205931   85575 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:37:13.205945   85575 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:37:13.206228   85575 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/config.json ...
	I1028 11:37:13.206247   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/config.json: {Name:mk21e799f46066ed7eec2f0ed0902ce4db33f071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:13.206380   85575 start.go:360] acquireMachinesLock for addons-558164: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:37:13.206442   85575 start.go:364] duration metric: took 43.994µs to acquireMachinesLock for "addons-558164"
	I1028 11:37:13.206466   85575 start.go:93] Provisioning new machine with config: &{Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:37:13.206523   85575 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:37:13.208790   85575 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 11:37:13.208921   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:13.208967   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:13.222161   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1028 11:37:13.222539   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:13.223213   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:13.223233   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:13.223571   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:13.223759   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:13.223883   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:13.224030   85575 start.go:159] libmachine.API.Create for "addons-558164" (driver="kvm2")
	I1028 11:37:13.224052   85575 client.go:168] LocalClient.Create starting
	I1028 11:37:13.224081   85575 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:37:13.390440   85575 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:37:13.677081   85575 main.go:141] libmachine: Running pre-create checks...
	I1028 11:37:13.677110   85575 main.go:141] libmachine: (addons-558164) Calling .PreCreateCheck
	I1028 11:37:13.677544   85575 main.go:141] libmachine: (addons-558164) Calling .GetConfigRaw
	I1028 11:37:13.678015   85575 main.go:141] libmachine: Creating machine...
	I1028 11:37:13.678031   85575 main.go:141] libmachine: (addons-558164) Calling .Create
	I1028 11:37:13.678162   85575 main.go:141] libmachine: (addons-558164) Creating KVM machine...
	I1028 11:37:13.679380   85575 main.go:141] libmachine: (addons-558164) DBG | found existing default KVM network
	I1028 11:37:13.680095   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:13.679930   85597 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1028 11:37:13.680120   85575 main.go:141] libmachine: (addons-558164) DBG | created network xml: 
	I1028 11:37:13.680132   85575 main.go:141] libmachine: (addons-558164) DBG | <network>
	I1028 11:37:13.680144   85575 main.go:141] libmachine: (addons-558164) DBG |   <name>mk-addons-558164</name>
	I1028 11:37:13.680157   85575 main.go:141] libmachine: (addons-558164) DBG |   <dns enable='no'/>
	I1028 11:37:13.680162   85575 main.go:141] libmachine: (addons-558164) DBG |   
	I1028 11:37:13.680168   85575 main.go:141] libmachine: (addons-558164) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:37:13.680174   85575 main.go:141] libmachine: (addons-558164) DBG |     <dhcp>
	I1028 11:37:13.680180   85575 main.go:141] libmachine: (addons-558164) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:37:13.680186   85575 main.go:141] libmachine: (addons-558164) DBG |     </dhcp>
	I1028 11:37:13.680191   85575 main.go:141] libmachine: (addons-558164) DBG |   </ip>
	I1028 11:37:13.680196   85575 main.go:141] libmachine: (addons-558164) DBG |   
	I1028 11:37:13.680202   85575 main.go:141] libmachine: (addons-558164) DBG | </network>
	I1028 11:37:13.680210   85575 main.go:141] libmachine: (addons-558164) DBG | 
	I1028 11:37:13.685189   85575 main.go:141] libmachine: (addons-558164) DBG | trying to create private KVM network mk-addons-558164 192.168.39.0/24...
	I1028 11:37:13.747391   85575 main.go:141] libmachine: (addons-558164) DBG | private KVM network mk-addons-558164 192.168.39.0/24 created
	I1028 11:37:13.747427   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:13.747359   85597 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:37:13.747446   85575 main.go:141] libmachine: (addons-558164) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164 ...
	I1028 11:37:13.747465   85575 main.go:141] libmachine: (addons-558164) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:37:13.747482   85575 main.go:141] libmachine: (addons-558164) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:37:13.995999   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:13.995832   85597 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa...
	I1028 11:37:14.093373   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:14.093256   85597 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/addons-558164.rawdisk...
	I1028 11:37:14.093406   85575 main.go:141] libmachine: (addons-558164) DBG | Writing magic tar header
	I1028 11:37:14.093419   85575 main.go:141] libmachine: (addons-558164) DBG | Writing SSH key tar header
	I1028 11:37:14.093513   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:14.093414   85597 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164 ...
	I1028 11:37:14.093562   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164
	I1028 11:37:14.093605   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164 (perms=drwx------)
	I1028 11:37:14.093626   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:37:14.093638   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:37:14.093649   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:37:14.093671   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:37:14.093685   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:37:14.093702   85575 main.go:141] libmachine: (addons-558164) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:37:14.093719   85575 main.go:141] libmachine: (addons-558164) Creating domain...
	I1028 11:37:14.093730   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:37:14.093751   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:37:14.093766   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:37:14.093779   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:37:14.093795   85575 main.go:141] libmachine: (addons-558164) DBG | Checking permissions on dir: /home
	I1028 11:37:14.093817   85575 main.go:141] libmachine: (addons-558164) DBG | Skipping /home - not owner
	I1028 11:37:14.094849   85575 main.go:141] libmachine: (addons-558164) define libvirt domain using xml: 
	I1028 11:37:14.094884   85575 main.go:141] libmachine: (addons-558164) <domain type='kvm'>
	I1028 11:37:14.094894   85575 main.go:141] libmachine: (addons-558164)   <name>addons-558164</name>
	I1028 11:37:14.094901   85575 main.go:141] libmachine: (addons-558164)   <memory unit='MiB'>4000</memory>
	I1028 11:37:14.094908   85575 main.go:141] libmachine: (addons-558164)   <vcpu>2</vcpu>
	I1028 11:37:14.094918   85575 main.go:141] libmachine: (addons-558164)   <features>
	I1028 11:37:14.094926   85575 main.go:141] libmachine: (addons-558164)     <acpi/>
	I1028 11:37:14.094935   85575 main.go:141] libmachine: (addons-558164)     <apic/>
	I1028 11:37:14.094942   85575 main.go:141] libmachine: (addons-558164)     <pae/>
	I1028 11:37:14.094951   85575 main.go:141] libmachine: (addons-558164)     
	I1028 11:37:14.094958   85575 main.go:141] libmachine: (addons-558164)   </features>
	I1028 11:37:14.094966   85575 main.go:141] libmachine: (addons-558164)   <cpu mode='host-passthrough'>
	I1028 11:37:14.094974   85575 main.go:141] libmachine: (addons-558164)   
	I1028 11:37:14.094989   85575 main.go:141] libmachine: (addons-558164)   </cpu>
	I1028 11:37:14.094997   85575 main.go:141] libmachine: (addons-558164)   <os>
	I1028 11:37:14.095006   85575 main.go:141] libmachine: (addons-558164)     <type>hvm</type>
	I1028 11:37:14.095037   85575 main.go:141] libmachine: (addons-558164)     <boot dev='cdrom'/>
	I1028 11:37:14.095052   85575 main.go:141] libmachine: (addons-558164)     <boot dev='hd'/>
	I1028 11:37:14.095060   85575 main.go:141] libmachine: (addons-558164)     <bootmenu enable='no'/>
	I1028 11:37:14.095069   85575 main.go:141] libmachine: (addons-558164)   </os>
	I1028 11:37:14.095082   85575 main.go:141] libmachine: (addons-558164)   <devices>
	I1028 11:37:14.095091   85575 main.go:141] libmachine: (addons-558164)     <disk type='file' device='cdrom'>
	I1028 11:37:14.095103   85575 main.go:141] libmachine: (addons-558164)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/boot2docker.iso'/>
	I1028 11:37:14.095112   85575 main.go:141] libmachine: (addons-558164)       <target dev='hdc' bus='scsi'/>
	I1028 11:37:14.095120   85575 main.go:141] libmachine: (addons-558164)       <readonly/>
	I1028 11:37:14.095127   85575 main.go:141] libmachine: (addons-558164)     </disk>
	I1028 11:37:14.095136   85575 main.go:141] libmachine: (addons-558164)     <disk type='file' device='disk'>
	I1028 11:37:14.095148   85575 main.go:141] libmachine: (addons-558164)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:37:14.095161   85575 main.go:141] libmachine: (addons-558164)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/addons-558164.rawdisk'/>
	I1028 11:37:14.095174   85575 main.go:141] libmachine: (addons-558164)       <target dev='hda' bus='virtio'/>
	I1028 11:37:14.095207   85575 main.go:141] libmachine: (addons-558164)     </disk>
	I1028 11:37:14.095229   85575 main.go:141] libmachine: (addons-558164)     <interface type='network'>
	I1028 11:37:14.095240   85575 main.go:141] libmachine: (addons-558164)       <source network='mk-addons-558164'/>
	I1028 11:37:14.095249   85575 main.go:141] libmachine: (addons-558164)       <model type='virtio'/>
	I1028 11:37:14.095261   85575 main.go:141] libmachine: (addons-558164)     </interface>
	I1028 11:37:14.095272   85575 main.go:141] libmachine: (addons-558164)     <interface type='network'>
	I1028 11:37:14.095284   85575 main.go:141] libmachine: (addons-558164)       <source network='default'/>
	I1028 11:37:14.095295   85575 main.go:141] libmachine: (addons-558164)       <model type='virtio'/>
	I1028 11:37:14.095305   85575 main.go:141] libmachine: (addons-558164)     </interface>
	I1028 11:37:14.095315   85575 main.go:141] libmachine: (addons-558164)     <serial type='pty'>
	I1028 11:37:14.095334   85575 main.go:141] libmachine: (addons-558164)       <target port='0'/>
	I1028 11:37:14.095350   85575 main.go:141] libmachine: (addons-558164)     </serial>
	I1028 11:37:14.095375   85575 main.go:141] libmachine: (addons-558164)     <console type='pty'>
	I1028 11:37:14.095414   85575 main.go:141] libmachine: (addons-558164)       <target type='serial' port='0'/>
	I1028 11:37:14.095428   85575 main.go:141] libmachine: (addons-558164)     </console>
	I1028 11:37:14.095435   85575 main.go:141] libmachine: (addons-558164)     <rng model='virtio'>
	I1028 11:37:14.095448   85575 main.go:141] libmachine: (addons-558164)       <backend model='random'>/dev/random</backend>
	I1028 11:37:14.095457   85575 main.go:141] libmachine: (addons-558164)     </rng>
	I1028 11:37:14.095465   85575 main.go:141] libmachine: (addons-558164)     
	I1028 11:37:14.095473   85575 main.go:141] libmachine: (addons-558164)     
	I1028 11:37:14.095487   85575 main.go:141] libmachine: (addons-558164)   </devices>
	I1028 11:37:14.095498   85575 main.go:141] libmachine: (addons-558164) </domain>
	I1028 11:37:14.095511   85575 main.go:141] libmachine: (addons-558164) 
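For reference, the network and domain definitions dumped above can be replayed by hand when debugging VM-creation failures on this agent. A minimal sketch, assuming the two XML documents printed in the log are saved to local files (net.xml and domain.xml are hypothetical names, and the disk/ISO paths inside the XML must exist on the host):

	# Recreate the private network and the VM with stock virsh commands,
	# against the same qemu:///system URI the kvm2 driver connects to.
	virsh --connect qemu:///system net-define net.xml
	virsh --connect qemu:///system net-start mk-addons-558164
	virsh --connect qemu:///system define domain.xml
	virsh --connect qemu:///system start addons-558164

This is not the code path minikube itself uses (the kvm2 driver talks to the libvirt API directly, as the main.go entries show); it only reproduces the same network and domain state for inspection.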
	I1028 11:37:14.099642   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:88:22:dc in network default
	I1028 11:37:14.100233   85575 main.go:141] libmachine: (addons-558164) Ensuring networks are active...
	I1028 11:37:14.100252   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:14.100920   85575 main.go:141] libmachine: (addons-558164) Ensuring network default is active
	I1028 11:37:14.101235   85575 main.go:141] libmachine: (addons-558164) Ensuring network mk-addons-558164 is active
	I1028 11:37:14.101734   85575 main.go:141] libmachine: (addons-558164) Getting domain xml...
	I1028 11:37:14.102491   85575 main.go:141] libmachine: (addons-558164) Creating domain...
	I1028 11:37:15.278262   85575 main.go:141] libmachine: (addons-558164) Waiting to get IP...
	I1028 11:37:15.279050   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:15.279449   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:15.279505   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:15.279446   85597 retry.go:31] will retry after 250.712213ms: waiting for machine to come up
	I1028 11:37:15.531891   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:15.532440   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:15.532469   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:15.532382   85597 retry.go:31] will retry after 317.721645ms: waiting for machine to come up
	I1028 11:37:15.851968   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:15.852430   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:15.852452   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:15.852389   85597 retry.go:31] will retry after 416.193792ms: waiting for machine to come up
	I1028 11:37:16.269654   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:16.270164   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:16.270206   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:16.270104   85597 retry.go:31] will retry after 596.082177ms: waiting for machine to come up
	I1028 11:37:16.867870   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:16.868226   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:16.868257   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:16.868167   85597 retry.go:31] will retry after 494.569738ms: waiting for machine to come up
	I1028 11:37:17.364782   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:17.365180   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:17.365211   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:17.365125   85597 retry.go:31] will retry after 705.333219ms: waiting for machine to come up
	I1028 11:37:18.071942   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:18.072306   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:18.072337   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:18.072244   85597 retry.go:31] will retry after 1.035817145s: waiting for machine to come up
	I1028 11:37:19.110041   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:19.110516   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:19.110541   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:19.110475   85597 retry.go:31] will retry after 1.293081461s: waiting for machine to come up
	I1028 11:37:20.405970   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:20.406392   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:20.406424   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:20.406321   85597 retry.go:31] will retry after 1.126472716s: waiting for machine to come up
	I1028 11:37:21.534558   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:21.534916   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:21.534974   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:21.534896   85597 retry.go:31] will retry after 1.87018139s: waiting for machine to come up
	I1028 11:37:23.406775   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:23.407187   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:23.407213   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:23.407138   85597 retry.go:31] will retry after 2.417463202s: waiting for machine to come up
	I1028 11:37:25.827684   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:25.828209   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:25.828238   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:25.828148   85597 retry.go:31] will retry after 2.584942589s: waiting for machine to come up
	I1028 11:37:28.414400   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:28.414749   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:28.414779   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:28.414696   85597 retry.go:31] will retry after 2.884443891s: waiting for machine to come up
	I1028 11:37:31.300952   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:31.301311   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find current IP address of domain addons-558164 in network mk-addons-558164
	I1028 11:37:31.301334   85575 main.go:141] libmachine: (addons-558164) DBG | I1028 11:37:31.301273   85597 retry.go:31] will retry after 3.721637101s: waiting for machine to come up
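The retry lines above show the driver polling libvirt until the freshly created guest obtains a DHCP lease on mk-addons-558164. When a VM never gets an IP, the same wait can be reproduced from a shell; a rough sketch (the network name and MAC address are taken from this log, and the fixed sleep is illustrative, not the driver's backoff schedule):

	# Poll the DHCP leases of the private network until the guest's MAC shows up.
	while ! virsh --connect qemu:///system net-dhcp-leases mk-addons-558164 | grep -q '52:54:00:8d:cc:de'; do
	    sleep 2
	done
	virsh --connect qemu:///system net-dhcp-leases mk-addons-558164   # shows the assigned IP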
	I1028 11:37:35.024742   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.025083   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has current primary IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.025102   85575 main.go:141] libmachine: (addons-558164) Found IP for machine: 192.168.39.31
	I1028 11:37:35.025117   85575 main.go:141] libmachine: (addons-558164) Reserving static IP address...
	I1028 11:37:35.025876   85575 main.go:141] libmachine: (addons-558164) DBG | unable to find host DHCP lease matching {name: "addons-558164", mac: "52:54:00:8d:cc:de", ip: "192.168.39.31"} in network mk-addons-558164
	I1028 11:37:35.135135   85575 main.go:141] libmachine: (addons-558164) DBG | Getting to WaitForSSH function...
	I1028 11:37:35.135165   85575 main.go:141] libmachine: (addons-558164) Reserved static IP address: 192.168.39.31
	I1028 11:37:35.135177   85575 main.go:141] libmachine: (addons-558164) Waiting for SSH to be available...
	I1028 11:37:35.138161   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.138638   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.138678   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.138909   85575 main.go:141] libmachine: (addons-558164) DBG | Using SSH client type: external
	I1028 11:37:35.138937   85575 main.go:141] libmachine: (addons-558164) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa (-rw-------)
	I1028 11:37:35.139001   85575 main.go:141] libmachine: (addons-558164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:37:35.139028   85575 main.go:141] libmachine: (addons-558164) DBG | About to run SSH command:
	I1028 11:37:35.139042   85575 main.go:141] libmachine: (addons-558164) DBG | exit 0
	I1028 11:37:35.259192   85575 main.go:141] libmachine: (addons-558164) DBG | SSH cmd err, output: <nil>: 
	I1028 11:37:35.259437   85575 main.go:141] libmachine: (addons-558164) KVM machine creation complete!
	I1028 11:37:35.259799   85575 main.go:141] libmachine: (addons-558164) Calling .GetConfigRaw
	I1028 11:37:35.292576   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:35.292859   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:35.293064   85575 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:37:35.293082   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:35.294472   85575 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:37:35.294486   85575 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:37:35.294491   85575 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:37:35.294498   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.296816   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.297176   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.297203   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.297343   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.297533   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.297690   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.298024   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.298215   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.298492   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.298509   85575 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:37:35.394625   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:37:35.394652   85575 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:37:35.394662   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.397428   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.397774   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.397800   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.398016   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.398179   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.398336   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.398480   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.398643   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.398816   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.398826   85575 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:37:35.491679   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:37:35.491746   85575 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:37:35.491756   85575 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:37:35.491767   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:35.492003   85575 buildroot.go:166] provisioning hostname "addons-558164"
	I1028 11:37:35.492039   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:35.492227   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.495011   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.495361   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.495390   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.495551   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.495743   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.495892   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.496022   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.496172   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.496348   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.496365   85575 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-558164 && echo "addons-558164" | sudo tee /etc/hostname
	I1028 11:37:35.603899   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-558164
	
	I1028 11:37:35.603949   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.606744   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.607225   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.607253   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.607425   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.607620   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.607799   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.607922   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.608076   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.608244   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.608264   85575 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-558164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-558164/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-558164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:37:35.710980   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:37:35.711019   85575 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:37:35.711052   85575 buildroot.go:174] setting up certificates
	I1028 11:37:35.711070   85575 provision.go:84] configureAuth start
	I1028 11:37:35.711088   85575 main.go:141] libmachine: (addons-558164) Calling .GetMachineName
	I1028 11:37:35.711352   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:35.714111   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.714470   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.714501   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.714632   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.717095   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.717381   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.717406   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.717530   85575 provision.go:143] copyHostCerts
	I1028 11:37:35.717622   85575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:37:35.717771   85575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:37:35.717853   85575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:37:35.717926   85575 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.addons-558164 san=[127.0.0.1 192.168.39.31 addons-558164 localhost minikube]
	I1028 11:37:35.781569   85575 provision.go:177] copyRemoteCerts
	I1028 11:37:35.781633   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:37:35.781659   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.783888   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.784201   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.784229   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.784401   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.784583   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.784742   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.784874   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:35.860777   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:37:35.883519   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:37:35.904258   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:37:35.924625   85575 provision.go:87] duration metric: took 213.535974ms to configureAuth
	I1028 11:37:35.924657   85575 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:37:35.924853   85575 config.go:182] Loaded profile config "addons-558164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:37:35.924941   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:35.927290   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.927668   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:35.927691   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:35.927841   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:35.928034   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.928173   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:35.928290   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:35.928465   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:35.928667   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:35.928687   85575 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:37:36.147947   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:37:36.147982   85575 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:37:36.147992   85575 main.go:141] libmachine: (addons-558164) Calling .GetURL
	I1028 11:37:36.149469   85575 main.go:141] libmachine: (addons-558164) DBG | Using libvirt version 6000000
	I1028 11:37:36.151424   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.151838   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.151864   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.152014   85575 main.go:141] libmachine: Docker is up and running!
	I1028 11:37:36.152029   85575 main.go:141] libmachine: Reticulating splines...
	I1028 11:37:36.152038   85575 client.go:171] duration metric: took 22.927977306s to LocalClient.Create
	I1028 11:37:36.152061   85575 start.go:167] duration metric: took 22.928033489s to libmachine.API.Create "addons-558164"
	I1028 11:37:36.152080   85575 start.go:293] postStartSetup for "addons-558164" (driver="kvm2")
	I1028 11:37:36.152093   85575 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:37:36.152109   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.152344   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:37:36.152371   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.154565   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.154930   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.154963   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.155094   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.155278   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.155459   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.155698   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:36.233438   85575 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:37:36.237296   85575 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:37:36.237320   85575 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:37:36.237394   85575 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:37:36.237419   85575 start.go:296] duration metric: took 85.331265ms for postStartSetup
	I1028 11:37:36.237457   85575 main.go:141] libmachine: (addons-558164) Calling .GetConfigRaw
	I1028 11:37:36.238016   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:36.240377   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.240705   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.240732   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.240955   85575 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/config.json ...
	I1028 11:37:36.241167   85575 start.go:128] duration metric: took 23.034632595s to createHost
	I1028 11:37:36.241194   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.244091   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.244450   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.244488   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.244591   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.244780   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.244996   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.245172   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.245329   85575 main.go:141] libmachine: Using SSH client type: native
	I1028 11:37:36.245498   85575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1028 11:37:36.245508   85575 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:37:36.339913   85575 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730115456.310135737
	
	I1028 11:37:36.339939   85575 fix.go:216] guest clock: 1730115456.310135737
	I1028 11:37:36.339947   85575 fix.go:229] Guest: 2024-10-28 11:37:36.310135737 +0000 UTC Remote: 2024-10-28 11:37:36.24118174 +0000 UTC m=+23.137199363 (delta=68.953997ms)
	I1028 11:37:36.340002   85575 fix.go:200] guest clock delta is within tolerance: 68.953997ms
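	The delta above is simply the guest timestamp minus the host timestamp from the date +%s.%N probe; reproduced with the values in this log (awk used here purely for the arithmetic):

	    # 1730115456.310135737 (guest) - 1730115456.24118174 (host) ~ 0.068954 s, well inside the sync tolerance
	    awk -v g=1730115456.310135737 -v r=1730115456.24118174 'BEGIN { printf "delta: %.6f s\n", g - r }'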
	I1028 11:37:36.340011   85575 start.go:83] releasing machines lock for "addons-558164", held for 23.13355684s
	I1028 11:37:36.340036   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.340295   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:36.342913   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.343237   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.343259   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.343506   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.344046   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.344234   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:36.344348   85575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:37:36.344401   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.344446   85575 ssh_runner.go:195] Run: cat /version.json
	I1028 11:37:36.344473   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:36.347120   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347316   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347474   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.347498   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347596   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.347718   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:36.347742   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:36.347784   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.347916   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:36.348108   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.348116   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:36.348286   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:36.348286   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:36.348406   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:36.441152   85575 ssh_runner.go:195] Run: systemctl --version
	I1028 11:37:36.446504   85575 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:37:36.605420   85575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:37:36.610633   85575 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:37:36.610713   85575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:37:36.625351   85575 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
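	The find invocation above is logged with its shell quoting stripped; a runnable form of the same step (a sketch, assuming GNU find on the guest) is:

	    # move any bridge/podman CNI configs aside so only the minikube-managed CNI stays active
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;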
	I1028 11:37:36.625384   85575 start.go:495] detecting cgroup driver to use...
	I1028 11:37:36.625456   85575 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:37:36.641617   85575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:37:36.654291   85575 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:37:36.654362   85575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:37:36.666862   85575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:37:36.679447   85575 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:37:36.797484   85575 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:37:36.953333   85575 docker.go:233] disabling docker service ...
	I1028 11:37:36.953402   85575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:37:36.966757   85575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:37:36.978803   85575 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:37:37.089989   85575 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:37:37.199847   85575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:37:37.212373   85575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:37:37.228120   85575 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:37:37.228179   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.237080   85575 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:37:37.237159   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.245953   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.255758   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.264822   85575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:37:37.274024   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.283082   85575 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.297348   85575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:37:37.306307   85575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:37:37.314805   85575 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:37:37.314889   85575 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:37:37.326781   85575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:37:37.334931   85575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:37:37.443819   85575 ssh_runner.go:195] Run: sudo systemctl restart crio
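	Taken together, the cri-o reconfiguration steps above reduce to the short script below (a sketch against the same /etc/crio/crio.conf.d/02-crio.conf drop-in; the default_sysctls edit for unprivileged ports is omitted for brevity):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # pin the pause image and switch cri-o to the cgroupfs driver
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    # run conmon in the pod cgroup
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    # make bridged traffic visible to iptables, enable forwarding, then restart the runtime
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio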
	I1028 11:37:37.530697   85575 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:37:37.530800   85575 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:37:37.535943   85575 start.go:563] Will wait 60s for crictl version
	I1028 11:37:37.536007   85575 ssh_runner.go:195] Run: which crictl
	I1028 11:37:37.539123   85575 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:37:37.577928   85575 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:37:37.578010   85575 ssh_runner.go:195] Run: crio --version
	I1028 11:37:37.602930   85575 ssh_runner.go:195] Run: crio --version
	I1028 11:37:37.630286   85575 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:37:37.631671   85575 main.go:141] libmachine: (addons-558164) Calling .GetIP
	I1028 11:37:37.634296   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:37.634682   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:37.634708   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:37.634885   85575 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:37:37.638588   85575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
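	The one-liner above keeps exactly one host.minikube.internal entry in /etc/hosts; unpacked for readability, the same logic is:

	    # drop any stale host.minikube.internal entry, append the current one, then install the file via sudo cp
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.39.1\thost.minikube.internal\n'
	    } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts

	The same pattern is reused further down for control-plane.minikube.internal.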
	I1028 11:37:37.650531   85575 kubeadm.go:883] updating cluster {Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:37:37.650700   85575 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:37:37.650770   85575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:37:37.680617   85575 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:37:37.680703   85575 ssh_runner.go:195] Run: which lz4
	I1028 11:37:37.684302   85575 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:37:37.688153   85575 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:37:37.688185   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:37:38.818986   85575 crio.go:462] duration metric: took 1.134706489s to copy over tarball
	I1028 11:37:38.819058   85575 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:37:40.812379   85575 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.993288544s)
	I1028 11:37:40.812417   85575 crio.go:469] duration metric: took 1.993400575s to extract the tarball
	I1028 11:37:40.812430   85575 ssh_runner.go:146] rm: /preloaded.tar.lz4
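	The preload path avoids pulling images over the network: a ~392 MB tarball of the v1.31.2 cri-o images is copied to the guest and unpacked under /var. Done by hand, the same sequence looks like this (a sketch, assuming the tarball is already at /preloaded.tar.lz4 on the node):

	    # unpack the preloaded images into /var, preserving security xattrs, then clean up
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4
	    # the images should now be listed by cri-o
	    sudo crictl images --output json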
	I1028 11:37:40.847345   85575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:37:40.888064   85575 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:37:40.888091   85575 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:37:40.888100   85575 kubeadm.go:934] updating node { 192.168.39.31 8443 v1.31.2 crio true true} ...
	I1028 11:37:40.888220   85575 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-558164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
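	The empty ExecStart= line in the unit text above is intentional: in a systemd drop-in it clears the ExecStart inherited from kubelet.service so the override on the next line fully replaces it. minikube installs this as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 312-byte scp a few lines below) and then applies it; the apply step, as a sketch:

	    # pick up the new drop-in and start the kubelet with the overridden ExecStart
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet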
	I1028 11:37:40.888286   85575 ssh_runner.go:195] Run: crio config
	I1028 11:37:40.928934   85575 cni.go:84] Creating CNI manager for ""
	I1028 11:37:40.928963   85575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:37:40.928975   85575 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:37:40.928998   85575 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-558164 NodeName:addons-558164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:37:40.929115   85575 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-558164"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.31"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:37:40.929174   85575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:37:40.937796   85575 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:37:40.937872   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:37:40.945990   85575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:37:40.960178   85575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:37:40.974016   85575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
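	The kubeadm config generated above is staged here as /var/tmp/minikube/kubeadm.yaml.new; it is promoted to kubeadm.yaml and handed to kubeadm init further down in the log. Reduced to its essentials (a sketch; the full --ignore-preflight-errors list appears verbatim in the init line below):

	    # promote the staged config and bootstrap the control plane with minikube's bundled kubeadm
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem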
	I1028 11:37:40.988538   85575 ssh_runner.go:195] Run: grep 192.168.39.31	control-plane.minikube.internal$ /etc/hosts
	I1028 11:37:40.991680   85575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:37:41.001997   85575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:37:41.121146   85575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:37:41.137478   85575 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164 for IP: 192.168.39.31
	I1028 11:37:41.137524   85575 certs.go:194] generating shared ca certs ...
	I1028 11:37:41.137549   85575 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.137730   85575 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:37:41.323762   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt ...
	I1028 11:37:41.323796   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt: {Name:mkc0b2b57f64ada4d969dda25941c2328582eade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.323973   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key ...
	I1028 11:37:41.323985   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key: {Name:mkd279fafe08c0316b34fd1a2897fb0bb5a048b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.324068   85575 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:37:41.737932   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt ...
	I1028 11:37:41.737964   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt: {Name:mk06bd09f2b619ede58b750d31dd90943c21f399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.738120   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key ...
	I1028 11:37:41.738131   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key: {Name:mk6537beaff0b053e2949ae2b84d3eccb7a6f708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.738197   85575 certs.go:256] generating profile certs ...
	I1028 11:37:41.738281   85575 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.key
	I1028 11:37:41.738304   85575 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt with IP's: []
	I1028 11:37:41.926331   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt ...
	I1028 11:37:41.926366   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: {Name:mk977f2dcc9ff37f478f6ba4fe9575f6afa3b18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.926561   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.key ...
	I1028 11:37:41.926576   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.key: {Name:mkd1f3a0b2154057485d76e9d5fc3969b2573f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:41.926684   85575 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb
	I1028 11:37:41.926705   85575 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31]
	I1028 11:37:42.277136   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb ...
	I1028 11:37:42.277167   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb: {Name:mk5aa529d00b15c94fe638a9c72f96545f1c3feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.277347   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb ...
	I1028 11:37:42.277364   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb: {Name:mkd0d87bc729a17c13c7130430c8595841656296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.277461   85575 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt.90d3d1fb -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt
	I1028 11:37:42.277541   85575 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key.90d3d1fb -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key
	I1028 11:37:42.277586   85575 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key
	I1028 11:37:42.277607   85575 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt with IP's: []
	I1028 11:37:42.480451   85575 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt ...
	I1028 11:37:42.480481   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt: {Name:mkb2e69f56c32095c770f87f4c5341b28506e6dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.480666   85575 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key ...
	I1028 11:37:42.480682   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key: {Name:mk519fae37889f93fa2ec24cc1ac335732e57d5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:42.480887   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:37:42.480922   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:37:42.480948   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:37:42.480972   85575 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:37:42.481579   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:37:42.504552   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:37:42.528760   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:37:42.549633   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:37:42.571625   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 11:37:42.593780   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 11:37:42.615851   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:37:42.636545   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:37:42.656381   85575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:37:42.676246   85575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:37:42.690571   85575 ssh_runner.go:195] Run: openssl version
	I1028 11:37:42.696147   85575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:37:42.709644   85575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:37:42.713840   85575 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:37:42.713910   85575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:37:42.720910   85575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
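	The b5213941.0 name is not arbitrary: it is the OpenSSL subject hash of minikubeCA, which is exactly what the openssl x509 -hash call above computes. The pattern, as a sketch:

	    # link the CA into the trust directory under its subject-hash name so OpenSSL-based clients find it
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"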
	I1028 11:37:42.731927   85575 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:37:42.736721   85575 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:37:42.736775   85575 kubeadm.go:392] StartCluster: {Name:addons-558164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-558164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:37:42.736879   85575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:37:42.736927   85575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:37:42.778782   85575 cri.go:89] found id: ""
	I1028 11:37:42.778859   85575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:37:42.787834   85575 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:37:42.796343   85575 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:37:42.804728   85575 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:37:42.804747   85575 kubeadm.go:157] found existing configuration files:
	
	I1028 11:37:42.804798   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:37:42.812654   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:37:42.812709   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:37:42.820889   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:37:42.829261   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:37:42.829315   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:37:42.839320   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:37:42.847009   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:37:42.847054   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:37:42.855175   85575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:37:42.862972   85575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:37:42.863017   85575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
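	The four grep/rm pairs above implement a single rule: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted before kubeadm runs. As a loop (a sketch):

	    # remove kubeconfigs that don't reference the expected control-plane endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done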
	I1028 11:37:42.870991   85575 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:37:43.011545   85575 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:37:53.238133   85575 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:37:53.238238   85575 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:37:53.238337   85575 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:37:53.238483   85575 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:37:53.238600   85575 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:37:53.238661   85575 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:37:53.240225   85575 out.go:235]   - Generating certificates and keys ...
	I1028 11:37:53.240295   85575 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:37:53.240364   85575 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:37:53.240450   85575 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:37:53.240514   85575 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:37:53.240598   85575 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:37:53.240668   85575 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:37:53.240748   85575 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:37:53.240877   85575 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-558164 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I1028 11:37:53.240926   85575 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:37:53.241040   85575 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-558164 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I1028 11:37:53.241119   85575 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:37:53.241185   85575 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:37:53.241227   85575 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:37:53.241291   85575 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:37:53.241372   85575 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:37:53.241448   85575 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:37:53.241518   85575 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:37:53.241605   85575 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:37:53.241686   85575 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:37:53.241797   85575 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:37:53.241886   85575 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:37:53.243281   85575 out.go:235]   - Booting up control plane ...
	I1028 11:37:53.243361   85575 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:37:53.243458   85575 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:37:53.243575   85575 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:37:53.243691   85575 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:37:53.243770   85575 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:37:53.243805   85575 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:37:53.243916   85575 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:37:53.244006   85575 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:37:53.244056   85575 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.843329ms
	I1028 11:37:53.244116   85575 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:37:53.244165   85575 kubeadm.go:310] [api-check] The API server is healthy after 5.501539291s
	I1028 11:37:53.244252   85575 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:37:53.244371   85575 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:37:53.244429   85575 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:37:53.244584   85575 kubeadm.go:310] [mark-control-plane] Marking the node addons-558164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:37:53.244682   85575 kubeadm.go:310] [bootstrap-token] Using token: p1t5xv.9jomyucun3sgp4xz
	I1028 11:37:53.246786   85575 out.go:235]   - Configuring RBAC rules ...
	I1028 11:37:53.246907   85575 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:37:53.247004   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:37:53.247141   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:37:53.247279   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:37:53.247386   85575 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:37:53.247461   85575 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:37:53.247561   85575 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:37:53.247616   85575 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:37:53.247684   85575 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:37:53.247692   85575 kubeadm.go:310] 
	I1028 11:37:53.247741   85575 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:37:53.247750   85575 kubeadm.go:310] 
	I1028 11:37:53.247856   85575 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:37:53.247869   85575 kubeadm.go:310] 
	I1028 11:37:53.247904   85575 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:37:53.247995   85575 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:37:53.248073   85575 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:37:53.248083   85575 kubeadm.go:310] 
	I1028 11:37:53.248156   85575 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:37:53.248171   85575 kubeadm.go:310] 
	I1028 11:37:53.248246   85575 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:37:53.248260   85575 kubeadm.go:310] 
	I1028 11:37:53.248335   85575 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:37:53.248437   85575 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:37:53.248531   85575 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:37:53.248549   85575 kubeadm.go:310] 
	I1028 11:37:53.248659   85575 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:37:53.248759   85575 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:37:53.248769   85575 kubeadm.go:310] 
	I1028 11:37:53.248870   85575 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p1t5xv.9jomyucun3sgp4xz \
	I1028 11:37:53.248991   85575 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 11:37:53.249022   85575 kubeadm.go:310] 	--control-plane 
	I1028 11:37:53.249038   85575 kubeadm.go:310] 
	I1028 11:37:53.249179   85575 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:37:53.249198   85575 kubeadm.go:310] 
	I1028 11:37:53.249331   85575 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p1t5xv.9jomyucun3sgp4xz \
	I1028 11:37:53.249503   85575 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
	I1028 11:37:53.249524   85575 cni.go:84] Creating CNI manager for ""
	I1028 11:37:53.249533   85575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:37:53.251860   85575 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 11:37:53.253071   85575 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 11:37:53.263814   85575 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 11:37:53.281245   85575 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:37:53.281300   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:53.281348   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-558164 minikube.k8s.io/updated_at=2024_10_28T11_37_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=addons-558164 minikube.k8s.io/primary=true
	I1028 11:37:53.421113   85575 ops.go:34] apiserver oom_adj: -16
	I1028 11:37:53.421212   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:53.921641   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:54.421611   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:54.921346   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:55.422221   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:55.921481   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:56.422289   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:56.922232   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:57.421532   85575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:37:57.523884   85575 kubeadm.go:1113] duration metric: took 4.242633176s to wait for elevateKubeSystemPrivileges
	I1028 11:37:57.523927   85575 kubeadm.go:394] duration metric: took 14.787157354s to StartCluster
	I1028 11:37:57.523950   85575 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:57.524080   85575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:37:57.524467   85575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:37:57.524678   85575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:37:57.524687   85575 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:37:57.524766   85575 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1028 11:37:57.524903   85575 addons.go:69] Setting inspektor-gadget=true in profile "addons-558164"
	I1028 11:37:57.524922   85575 addons.go:69] Setting metrics-server=true in profile "addons-558164"
	I1028 11:37:57.524920   85575 addons.go:69] Setting default-storageclass=true in profile "addons-558164"
	I1028 11:37:57.524936   85575 addons.go:234] Setting addon inspektor-gadget=true in "addons-558164"
	I1028 11:37:57.524934   85575 config.go:182] Loaded profile config "addons-558164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:37:57.524945   85575 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-558164"
	I1028 11:37:57.524951   85575 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-558164"
	I1028 11:37:57.524958   85575 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-558164"
	I1028 11:37:57.524959   85575 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-558164"
	I1028 11:37:57.524965   85575 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-558164"
	I1028 11:37:57.524970   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.524963   85575 addons.go:69] Setting storage-provisioner=true in profile "addons-558164"
	I1028 11:37:57.524992   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.524994   85575 addons.go:69] Setting volcano=true in profile "addons-558164"
	I1028 11:37:57.525005   85575 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-558164"
	I1028 11:37:57.525008   85575 addons.go:234] Setting addon volcano=true in "addons-558164"
	I1028 11:37:57.525016   85575 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-558164"
	I1028 11:37:57.525037   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525049   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525062   85575 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-558164"
	I1028 11:37:57.525094   85575 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-558164"
	I1028 11:37:57.525120   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.524902   85575 addons.go:69] Setting yakd=true in profile "addons-558164"
	I1028 11:37:57.525419   85575 addons.go:69] Setting registry=true in profile "addons-558164"
	I1028 11:37:57.525429   85575 addons.go:234] Setting addon yakd=true in "addons-558164"
	I1028 11:37:57.525431   85575 addons.go:234] Setting addon registry=true in "addons-558164"
	I1028 11:37:57.525432   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525438   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525448   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525452   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525471   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525480   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525491   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525520   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525581   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525593   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525602   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525613   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525650   85575 addons.go:69] Setting volumesnapshots=true in profile "addons-558164"
	I1028 11:37:57.525663   85575 addons.go:234] Setting addon volumesnapshots=true in "addons-558164"
	I1028 11:37:57.524936   85575 addons.go:234] Setting addon metrics-server=true in "addons-558164"
	I1028 11:37:57.524933   85575 addons.go:69] Setting cloud-spanner=true in profile "addons-558164"
	I1028 11:37:57.525676   85575 addons.go:69] Setting ingress=true in profile "addons-558164"
	I1028 11:37:57.525682   85575 addons.go:234] Setting addon cloud-spanner=true in "addons-558164"
	I1028 11:37:57.525685   85575 addons.go:234] Setting addon ingress=true in "addons-558164"
	I1028 11:37:57.525696   85575 addons.go:69] Setting gcp-auth=true in profile "addons-558164"
	I1028 11:37:57.525700   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525713   85575 mustload.go:65] Loading cluster: addons-558164
	I1028 11:37:57.525718   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525721   85575 addons.go:69] Setting ingress-dns=true in profile "addons-558164"
	I1028 11:37:57.525732   85575 addons.go:234] Setting addon ingress-dns=true in "addons-558164"
	I1028 11:37:57.524993   85575 addons.go:234] Setting addon storage-provisioner=true in "addons-558164"
	I1028 11:37:57.525849   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525874   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.525886   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525902   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.525884   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.525931   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.526095   85575 config.go:182] Loaded profile config "addons-558164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:37:57.526235   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526268   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.526305   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.526318   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526335   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.526348   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.526435   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526463   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.526304   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.526951   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.527039   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.527442   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.527482   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.527535   85575 out.go:177] * Verifying Kubernetes components...
	I1028 11:37:57.528396   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.528790   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.528833   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.529043   85575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:37:57.546735   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I1028 11:37:57.546823   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I1028 11:37:57.546911   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41257
	I1028 11:37:57.547419   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.547543   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.547655   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
	I1028 11:37:57.547740   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1028 11:37:57.548023   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.548049   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.548125   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.548155   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.548197   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.548261   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I1028 11:37:57.548540   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.548577   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.548881   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.548902   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.548971   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.549020   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.549125   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.549402   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.549495   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.549532   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.550029   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550044   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550095   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550194   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550204   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550313   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550325   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550422   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.550431   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.550620   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550854   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550925   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.550972   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.551188   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.551372   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.551404   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.551590   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.551647   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.553173   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.553210   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.572094   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I1028 11:37:57.572317   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.572388   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.572594   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.573305   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.573330   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.573812   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.574320   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.574363   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.574611   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I1028 11:37:57.576260   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35393
	I1028 11:37:57.576690   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.585427   85575 addons.go:234] Setting addon default-storageclass=true in "addons-558164"
	I1028 11:37:57.585489   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.590207   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I1028 11:37:57.590224   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.590379   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I1028 11:37:57.590443   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I1028 11:37:57.590765   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.590785   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.590902   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.590935   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.590978   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.591192   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591216   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.591683   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.591775   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591776   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591790   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.591788   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.591792   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.591807   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.592134   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.592182   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.592205   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.592255   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.592300   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.592359   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.592424   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.592595   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.592637   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.607823   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.608071   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.608117   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.608391   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I1028 11:37:57.608821   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.609013   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1028 11:37:57.609390   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.609428   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.609555   85575 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-558164"
	I1028 11:37:57.609596   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.609953   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.609994   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.610042   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.610590   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.610674   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.610691   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.611109   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.611122   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.611145   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.611230   85575 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 11:37:57.611385   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.611469   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.612448   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.612488   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.612705   85575 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:37:57.612729   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 11:37:57.612754   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.613701   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1028 11:37:57.614434   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.615428   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.615449   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.616704   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.616728   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I1028 11:37:57.616743   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.617094   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.617275   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.617307   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.617514   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.617536   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.617514   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I1028 11:37:57.617963   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.617971   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45475
	I1028 11:37:57.618039   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.618078   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I1028 11:37:57.618151   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.618530   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.618538   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.618627   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.618674   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35169
	I1028 11:37:57.619018   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.619037   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.619037   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.619070   85575 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 11:37:57.619170   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.619394   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.619416   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.619439   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.620019   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.620060   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.620066   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.620081   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.620451   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.620649   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.620663   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.620910   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.621556   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.621643   85575 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 11:37:57.621875   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.622311   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.622564   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.622754   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:37:57.623155   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.623185   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.623395   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39181
	I1028 11:37:57.623774   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.623907   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.624118   85575 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 11:37:57.624137   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 11:37:57.624167   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.624201   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.624375   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.624392   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.624464   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:57.624472   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:57.624586   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.624596   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.624650   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:37:57.624668   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:57.624674   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:57.624682   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:57.624689   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:57.624955   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:57.624969   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	W1028 11:37:57.625044   85575 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 11:37:57.625352   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.625413   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.625450   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.626035   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.626066   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.626568   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.627732   85575 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:37:57.628209   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.628459   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.628888   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.628912   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.628955   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.629232   85575 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:37:57.629251   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:37:57.629268   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.629433   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.629625   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.629877   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.630193   85575 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 11:37:57.631478   85575 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:37:57.631496   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 11:37:57.631514   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.634013   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.634353   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.634374   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.634884   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.634954   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.635136   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.635298   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.635447   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.636282   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.636306   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.636488   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.636658   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.636812   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.636864   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1028 11:37:57.637122   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.637369   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.637823   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.637840   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.638233   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.638448   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.639967   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.641767   85575 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 11:37:57.642074   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I1028 11:37:57.642398   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.642824   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.642840   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.643039   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 11:37:57.643052   85575 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 11:37:57.643068   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.643199   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.643366   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.645141   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.646575   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.646682   85575 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 11:37:57.646895   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
	I1028 11:37:57.647180   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.647205   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.647370   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.647463   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.647648   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.647802   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.648068   85575 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 11:37:57.648088   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 11:37:57.648104   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.648250   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.648261   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.647955   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.648594   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.648812   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.650873   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.652020   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I1028 11:37:57.652145   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.652497   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.652619   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41973
	I1028 11:37:57.652739   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 11:37:57.652946   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.652963   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.653096   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.653434   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.653462   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.653496   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.653633   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.653645   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.653699   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.653901   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.654070   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.654133   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.655172   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 11:37:57.655372   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1028 11:37:57.655407   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.656848   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.656973   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.657144   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.657561   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.657580   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.657724   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 11:37:57.657993   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.658052   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.658745   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36285
	I1028 11:37:57.659232   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.659270   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.659964   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.660466   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.660483   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.660882   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.660959   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 11:37:57.661106   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.661604   85575 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 11:37:57.661719   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I1028 11:37:57.662565   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.662994   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.663225   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.663243   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.663356   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 11:37:57.663448   85575 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:37:57.663471   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 11:37:57.663494   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.663784   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.664615   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:37:57.664653   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:37:57.664766   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 11:37:57.665819   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 11:37:57.666045   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I1028 11:37:57.666054   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 11:37:57.666068   85575 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 11:37:57.666086   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.666410   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.666626   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.666895   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.666912   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.667118   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1028 11:37:57.667368   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.667588   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.667817   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.667926   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.668049   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.668125   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 11:37:57.668164   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.668178   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.668713   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.668779   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.669034   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.669093   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.669888   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.670891   85575 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 11:37:57.671494   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.671582   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.672087   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.672115   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.672152   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.672341   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.672487   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.672797   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.673070   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.673100   85575 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 11:37:57.673209   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 11:37:57.673231   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 11:37:57.673249   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.673464   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1028 11:37:57.673661   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.674096   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.674598   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 11:37:57.674614   85575 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 11:37:57.674630   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.674714   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.674742   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.675074   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.675257   85575 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 11:37:57.675290   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.676448   85575 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 11:37:57.676471   85575 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 11:37:57.676489   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.676793   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.677190   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.677216   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.677366   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.677428   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.677847   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.678025   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.678043   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.678332   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.678641   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.678710   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.678725   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.678821   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.678914   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.679001   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.679113   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 11:37:57.680246   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:37:57.681459   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:37:57.682177   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.682603   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.682635   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.682739   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.682916   85575 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:37:57.682942   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 11:37:57.682961   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.682917   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.683113   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.683254   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	W1028 11:37:57.684067   85575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42158->192.168.39.31:22: read: connection reset by peer
	I1028 11:37:57.684109   85575 retry.go:31] will retry after 143.190095ms: ssh: handshake failed: read tcp 192.168.39.1:42158->192.168.39.31:22: read: connection reset by peer
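The two lines above record a transient SSH failure during parallel addon setup: the first dial to 192.168.39.31:22 is reset by the guest, and retry.go schedules another attempt after ~143ms instead of failing the run. A minimal shell sketch of the same retry-with-short-backoff idea (host, user, port and key path are copied from the surrounding log; the attempt count and delay are illustrative, not minikube's actual values):

    # Probe the node over SSH, retrying a few times with a short delay,
    # mirroring the "will retry after 143.190095ms" behaviour logged above.
    for attempt in 1 2 3 4 5; do
      ssh -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa \
          -o StrictHostKeyChecking=no -p 22 docker@192.168.39.31 true && break
      sleep 0.2
    done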
	I1028 11:37:57.685684   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
	I1028 11:37:57.685765   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.686028   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.686046   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.686207   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.686262   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.686398   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.686505   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.686615   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.687337   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.687352   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.687910   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.688144   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.689530   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.691301   85575 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 11:37:57.692727   85575 out.go:177]   - Using image docker.io/busybox:stable
	I1028 11:37:57.693657   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I1028 11:37:57.694005   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:37:57.694236   85575 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:37:57.694251   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 11:37:57.694266   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.695402   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:37:57.695432   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:37:57.695809   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:37:57.696062   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:37:57.697374   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.697550   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:37:57.697809   85575 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:37:57.697822   85575 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:37:57.697837   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:37:57.697895   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.697908   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.698034   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.698154   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.698236   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.698308   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:57.700696   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.700996   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:37:57.701013   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:37:57.701146   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:37:57.701267   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:37:57.701467   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:37:57.701576   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:37:58.013203   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 11:37:58.031335   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 11:37:58.056849   85575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:37:58.056930   85575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:37:58.094381   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 11:37:58.097831   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 11:37:58.100436   85575 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 11:37:58.100459   85575 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 11:37:58.143514   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:37:58.144997   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:37:58.175948   85575 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 11:37:58.175974   85575 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 11:37:58.191313   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 11:37:58.191331   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 11:37:58.236374   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 11:37:58.259145   85575 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:37:58.259164   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 11:37:58.259319   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 11:37:58.293886   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 11:37:58.293923   85575 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 11:37:58.298064   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 11:37:58.298084   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 11:37:58.317562   85575 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:37:58.317582   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 11:37:58.353243   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 11:37:58.353271   85575 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 11:37:58.361839   85575 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 11:37:58.361865   85575 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 11:37:58.390477   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 11:37:58.390512   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 11:37:58.438189   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 11:37:58.482073   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 11:37:58.482110   85575 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 11:37:58.494416   85575 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:37:58.494444   85575 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 11:37:58.519960   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 11:37:58.586375   85575 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 11:37:58.586409   85575 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 11:37:58.611580   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 11:37:58.611604   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 11:37:58.707599   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 11:37:58.707642   85575 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 11:37:58.743699   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:37:58.781401   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 11:37:58.781437   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 11:37:58.793594   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 11:37:58.793626   85575 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 11:37:58.809062   85575 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:37:58.809084   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 11:37:58.918922   85575 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 11:37:58.918970   85575 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 11:37:58.974367   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 11:37:59.039594   85575 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:37:59.039626   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 11:37:59.212018   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 11:37:59.212045   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 11:37:59.383529   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:37:59.534469   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.521229343s)
	I1028 11:37:59.534535   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.503174377s)
	I1028 11:37:59.534542   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.534555   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.534564   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.534644   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.534886   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:37:59.534931   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.534941   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.534949   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.534955   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.535048   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.535060   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.535068   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:37:59.535079   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:37:59.535200   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.535281   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.536627   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:37:59.536649   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:37:59.536659   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:37:59.597137   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 11:37:59.597168   85575 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 11:37:59.864910   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 11:37:59.864942   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 11:38:00.039475   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 11:38:00.039500   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 11:38:00.170878   85575 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:38:00.170906   85575 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 11:38:00.455413   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 11:38:00.831213   85575 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.774321855s)
	I1028 11:38:00.831263   85575 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.774281707s)
	I1028 11:38:00.831293   85575 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
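The /bin/bash pipeline that just completed rewrites the coredns ConfigMap in place: sed inserts a hosts block mapping host.minikube.internal to the host-side gateway 192.168.39.1 (plus a log directive) before the result is fed back through kubectl replace. A quick way to confirm the injected record, assuming the same kubeconfig and cluster as the log (a verification sketch, not part of the test itself):

    # Print the Corefile and look for the injected hosts block.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # The output should contain a stanza equivalent to:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }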
	I1028 11:38:00.831978   85575 node_ready.go:35] waiting up to 6m0s for node "addons-558164" to be "Ready" ...
	I1028 11:38:00.835743   85575 node_ready.go:49] node "addons-558164" has status "Ready":"True"
	I1028 11:38:00.835761   85575 node_ready.go:38] duration metric: took 3.761887ms for node "addons-558164" to be "Ready" ...
	I1028 11:38:00.835769   85575 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:38:00.854421   85575 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:01.416145   85575 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-558164" context rescaled to 1 replicas
	I1028 11:38:02.925015   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:02.967619   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.873194397s)
	I1028 11:38:02.967692   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:02.967705   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:02.968048   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:02.968068   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:02.968078   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:02.968087   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:02.968318   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:02.968348   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:02.968368   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:03.041715   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:03.041737   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:03.042099   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:03.042116   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:04.694021   85575 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 11:38:04.694092   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:38:04.697524   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:04.698007   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:38:04.698035   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:04.698253   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:38:04.698452   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:38:04.698625   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:38:04.698772   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:38:05.081314   85575 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 11:38:05.146173   85575 addons.go:234] Setting addon gcp-auth=true in "addons-558164"
	I1028 11:38:05.146243   85575 host.go:66] Checking if "addons-558164" exists ...
	I1028 11:38:05.146663   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:38:05.146702   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:38:05.162751   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36849
	I1028 11:38:05.163268   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:38:05.163824   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:38:05.163847   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:38:05.164161   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:38:05.164645   85575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:38:05.164675   85575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:38:05.179777   85575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1028 11:38:05.180296   85575 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:38:05.180917   85575 main.go:141] libmachine: Using API Version  1
	I1028 11:38:05.180943   85575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:38:05.181310   85575 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:38:05.181568   85575 main.go:141] libmachine: (addons-558164) Calling .GetState
	I1028 11:38:05.183220   85575 main.go:141] libmachine: (addons-558164) Calling .DriverName
	I1028 11:38:05.183459   85575 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 11:38:05.183494   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHHostname
	I1028 11:38:05.186386   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:05.186788   85575 main.go:141] libmachine: (addons-558164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:cc:de", ip: ""} in network mk-addons-558164: {Iface:virbr1 ExpiryTime:2024-10-28 12:37:27 +0000 UTC Type:0 Mac:52:54:00:8d:cc:de Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-558164 Clientid:01:52:54:00:8d:cc:de}
	I1028 11:38:05.186819   85575 main.go:141] libmachine: (addons-558164) DBG | domain addons-558164 has defined IP address 192.168.39.31 and MAC address 52:54:00:8d:cc:de in network mk-addons-558164
	I1028 11:38:05.187220   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHPort
	I1028 11:38:05.187412   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHKeyPath
	I1028 11:38:05.187567   85575 main.go:141] libmachine: (addons-558164) Calling .GetSSHUsername
	I1028 11:38:05.187741   85575 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/addons-558164/id_rsa Username:docker}
	I1028 11:38:05.392022   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:05.598176   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.500297613s)
	I1028 11:38:05.598250   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598265   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598314   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.454767946s)
	I1028 11:38:05.598360   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598375   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598425   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.453396207s)
	I1028 11:38:05.598456   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598465   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598470   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.362070954s)
	I1028 11:38:05.598490   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598498   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598549   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.339197748s)
	I1028 11:38:05.598585   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598587   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.160370446s)
	I1028 11:38:05.598597   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598608   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598620   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598713   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.07873069s)
	I1028 11:38:05.598730   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598738   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598836   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.855105031s)
	I1028 11:38:05.598851   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598865   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.598937   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.598935   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.624527849s)
	I1028 11:38:05.598960   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.598967   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599015   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599021   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.215453687s)
	I1028 11:38:05.599029   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599030   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599038   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599046   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	W1028 11:38:05.599053   85575 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 11:38:05.599057   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599070   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599039   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599095   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599098   85575 retry.go:31] will retry after 195.291749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
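Both the original apply and the queued retry fail the same way: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD that registers that kind is submitted in the same kubectl apply, so the API server has not discovered it yet ("ensure CRDs are installed first"). minikube simply retries, and the forced re-apply at 11:38:05.795 appears to succeed once the CRDs from the first attempt are registered. One way to avoid the race altogether, sketched here with the manifest paths taken from the log, is to apply the CRDs on their own, wait for them to reach the Established condition, and only then apply the objects that depend on them:

    # Apply the snapshot CRDs first ...
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # ... wait until the API server has registered the new kinds ...
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    # ... then apply the resources that use them.
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml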
	I1028 11:38:05.599104   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599166   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.599191   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599198   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599205   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599210   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599276   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.599306   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599312   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599356   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599363   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599371   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599377   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.599424   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.599446   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.599452   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599459   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.599465   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.600502   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.600532   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.600539   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.599078   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.600749   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.600862   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.600888   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.600894   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.600901   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.600907   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.602186   85575 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-558164 service yakd-dashboard -n yakd-dashboard
	
	I1028 11:38:05.602482   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.602505   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.602530   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.602537   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.603458   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.603489   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.603495   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604082   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604098   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604108   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604125   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604134   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604145   85575 addons.go:475] Verifying addon metrics-server=true in "addons-558164"
	I1028 11:38:05.604336   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604379   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604390   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604400   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.604410   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.604503   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604518   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604528   85575 addons.go:475] Verifying addon ingress=true in "addons-558164"
	I1028 11:38:05.604715   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604886   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.604922   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.604934   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.604943   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.604951   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.605275   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.605290   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.605303   85575 addons.go:475] Verifying addon registry=true in "addons-558164"
	I1028 11:38:05.605555   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.605704   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.605590   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:05.606971   85575 out.go:177] * Verifying registry addon...
	I1028 11:38:05.607177   85575 out.go:177] * Verifying ingress addon...
	I1028 11:38:05.609239   85575 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 11:38:05.609433   85575 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 11:38:05.621964   85575 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 11:38:05.621985   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:05.627377   85575 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 11:38:05.627400   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
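The kapi.go:75/96 lines implement the "Verifying registry addon" and "Verifying ingress addon" steps: every pod matched by the given label selector is polled until it reports Ready (both are still Pending here, so the polling lines continue below). An equivalent one-shot check from the command line, using the selectors and context name from the log (the timeout is illustrative):

    kubectl --context addons-558164 -n ingress-nginx wait pod \
      --selector app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m
    kubectl --context addons-558164 -n kube-system wait pod \
      --selector kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m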
	I1028 11:38:05.636349   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:05.636368   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:05.636628   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:05.636650   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:05.795147   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 11:38:06.117016   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:06.117808   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:06.792609   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:06.792743   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:07.075814   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.62033457s)
	I1028 11:38:07.075871   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.075897   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.075912   85575 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.8924203s)
	I1028 11:38:07.076162   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.076206   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:07.076205   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:07.076221   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.076235   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.076471   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.076502   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:07.076515   85575 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-558164"
	I1028 11:38:07.077552   85575 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 11:38:07.077560   85575 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 11:38:07.079447   85575 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 11:38:07.080633   85575 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 11:38:07.080655   85575 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 11:38:07.080666   85575 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 11:38:07.108586   85575 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 11:38:07.108619   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:07.122207   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:07.122208   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:07.288223   85575 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 11:38:07.288251   85575 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 11:38:07.392492   85575 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:38:07.392521   85575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 11:38:07.441492   85575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 11:38:07.587089   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:07.615315   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:07.616431   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:07.860433   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:07.906897   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.111691676s)
	I1028 11:38:07.906967   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.906992   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.907356   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:07.907375   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.907398   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:07.907414   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:07.907423   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:07.907676   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:07.907694   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:08.086020   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:08.113726   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:08.114282   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:08.591441   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:08.684556   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:08.687733   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:08.721732   85575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.280201986s)
	I1028 11:38:08.721790   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:08.721805   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:08.722065   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:08.722087   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:08.722097   85575 main.go:141] libmachine: Making call to close driver server
	I1028 11:38:08.722106   85575 main.go:141] libmachine: (addons-558164) Calling .Close
	I1028 11:38:08.722333   85575 main.go:141] libmachine: (addons-558164) DBG | Closing plugin on server side
	I1028 11:38:08.722381   85575 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:38:08.722399   85575 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:38:08.723317   85575 addons.go:475] Verifying addon gcp-auth=true in "addons-558164"
	I1028 11:38:08.724616   85575 out.go:177] * Verifying gcp-auth addon...
	I1028 11:38:08.726521   85575 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 11:38:08.737885   85575 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 11:38:08.737904   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:09.086612   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:09.114140   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:09.114448   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:09.230179   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:09.590166   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:09.614222   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:09.614261   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:09.729827   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:10.085472   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:10.113080   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:10.113431   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:10.230513   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:10.360509   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:10.585454   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:10.613084   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:10.614336   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:10.730562   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:11.085756   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:11.113661   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:11.113867   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:11.230310   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:11.585062   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:11.616698   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:11.618902   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:11.730036   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:12.086110   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:12.113094   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:12.113237   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:12.230126   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:12.585348   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:12.613768   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:12.615338   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:12.730211   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:12.860309   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:13.085981   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:13.114249   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:13.114848   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:13.230844   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:13.587311   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:13.615144   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:13.615274   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:13.729742   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:14.259622   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:14.266475   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:14.266841   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:14.268935   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:14.587446   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:14.613080   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:14.613164   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:14.729850   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:15.085153   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:15.113050   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:15.113139   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:15.230047   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:15.360296   85575 pod_ready.go:103] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"False"
	I1028 11:38:15.585811   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:15.612666   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:15.613477   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:15.729459   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:16.084998   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:16.113403   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:16.115121   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:16.230626   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:16.586058   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:16.613183   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:16.613974   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:16.730181   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:17.086034   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:17.113140   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:17.114432   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:17.230398   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:17.360766   85575 pod_ready.go:93] pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.360789   85575 pod_ready.go:82] duration metric: took 16.506340866s for pod "amd-gpu-device-plugin-hf6nm" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.360798   85575 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6tgvv" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.364703   85575 pod_ready.go:93] pod "coredns-7c65d6cfc9-6tgvv" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.364727   85575 pod_ready.go:82] duration metric: took 3.921896ms for pod "coredns-7c65d6cfc9-6tgvv" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.364740   85575 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.366215   85575 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mfdr7" not found
	I1028 11:38:17.366237   85575 pod_ready.go:82] duration metric: took 1.489435ms for pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace to be "Ready" ...
	E1028 11:38:17.366247   85575 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mfdr7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mfdr7" not found
	I1028 11:38:17.366252   85575 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.369860   85575 pod_ready.go:93] pod "etcd-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.369879   85575 pod_ready.go:82] duration metric: took 3.620568ms for pod "etcd-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.369887   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.373437   85575 pod_ready.go:93] pod "kube-apiserver-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.373452   85575 pod_ready.go:82] duration metric: took 3.560184ms for pod "kube-apiserver-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.373460   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.558227   85575 pod_ready.go:93] pod "kube-controller-manager-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.558255   85575 pod_ready.go:82] duration metric: took 184.789051ms for pod "kube-controller-manager-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.558266   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pbrhz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.584747   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:17.613367   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:17.613912   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:17.732093   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:17.958474   85575 pod_ready.go:93] pod "kube-proxy-pbrhz" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:17.958500   85575 pod_ready.go:82] duration metric: took 400.227461ms for pod "kube-proxy-pbrhz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:17.958512   85575 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.086182   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:18.112638   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:18.113092   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:18.230593   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:18.359139   85575 pod_ready.go:93] pod "kube-scheduler-addons-558164" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:18.359174   85575 pod_ready.go:82] duration metric: took 400.654865ms for pod "kube-scheduler-addons-558164" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.359192   85575 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tmgxz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.584980   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:18.613259   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:18.613911   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:18.730719   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:18.759195   85575 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tmgxz" in "kube-system" namespace has status "Ready":"True"
	I1028 11:38:18.759218   85575 pod_ready.go:82] duration metric: took 400.017509ms for pod "nvidia-device-plugin-daemonset-tmgxz" in "kube-system" namespace to be "Ready" ...
	I1028 11:38:18.759227   85575 pod_ready.go:39] duration metric: took 17.923448238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:38:18.759247   85575 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:38:18.759308   85575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:38:18.808892   85575 api_server.go:72] duration metric: took 21.284167287s to wait for apiserver process to appear ...
	I1028 11:38:18.808918   85575 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:38:18.808939   85575 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I1028 11:38:18.812999   85575 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I1028 11:38:18.813930   85575 api_server.go:141] control plane version: v1.31.2
	I1028 11:38:18.813951   85575 api_server.go:131] duration metric: took 5.02705ms to wait for apiserver health ...
	I1028 11:38:18.813959   85575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:38:18.967090   85575 system_pods.go:59] 18 kube-system pods found
	I1028 11:38:18.967123   85575 system_pods.go:61] "amd-gpu-device-plugin-hf6nm" [0741f17c-8923-4320-9291-a8c931291ac0] Running
	I1028 11:38:18.967129   85575 system_pods.go:61] "coredns-7c65d6cfc9-6tgvv" [3f418701-d48a-4380-a42c-d4facbdb4f25] Running
	I1028 11:38:18.967135   85575 system_pods.go:61] "csi-hostpath-attacher-0" [b72fb2c5-aba3-42da-8842-c7c82b4dc7d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 11:38:18.967141   85575 system_pods.go:61] "csi-hostpath-resizer-0" [fbb4ad73-884c-49ad-afce-83f9db13c7bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 11:38:18.967149   85575 system_pods.go:61] "csi-hostpathplugin-w9lwc" [a47fd224-db98-4ad2-b5d3-3c0215182531] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 11:38:18.967156   85575 system_pods.go:61] "etcd-addons-558164" [9cc5084d-707b-43d9-b040-6fd37f3039d1] Running
	I1028 11:38:18.967162   85575 system_pods.go:61] "kube-apiserver-addons-558164" [64eab89a-fbf4-4a89-89a7-fe2d257b2c4a] Running
	I1028 11:38:18.967167   85575 system_pods.go:61] "kube-controller-manager-addons-558164" [fee40a2a-2feb-46d2-8d34-673155f16349] Running
	I1028 11:38:18.967176   85575 system_pods.go:61] "kube-ingress-dns-minikube" [4897117b-12e8-4427-823d-350b57c963e1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1028 11:38:18.967186   85575 system_pods.go:61] "kube-proxy-pbrhz" [1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa] Running
	I1028 11:38:18.967194   85575 system_pods.go:61] "kube-scheduler-addons-558164" [d18c9948-8ede-493d-b11d-548cd422d0a3] Running
	I1028 11:38:18.967200   85575 system_pods.go:61] "metrics-server-84c5f94fbc-xzgq8" [7cebd793-5c4b-4588-bba1-fdb19c5e4fe4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 11:38:18.967206   85575 system_pods.go:61] "nvidia-device-plugin-daemonset-tmgxz" [2222e84c-777d-4de9-a7d0-c0f8307c6df7] Running
	I1028 11:38:18.967213   85575 system_pods.go:61] "registry-66c9cd494c-knm9h" [ef5d7a78-4f98-44f2-8f1f-121ec2384ac3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1028 11:38:18.967220   85575 system_pods.go:61] "registry-proxy-6mfkq" [4c6c611d-0f32-46ff-b60d-db1ab8734769] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1028 11:38:18.967230   85575 system_pods.go:61] "snapshot-controller-56fcc65765-9492j" [69e9e3e8-53e2-4132-a09f-5f8ce0b786a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:18.967238   85575 system_pods.go:61] "snapshot-controller-56fcc65765-brfbf" [26209eed-8f71-4c6e-b5ec-7232a38b8ec5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:18.967241   85575 system_pods.go:61] "storage-provisioner" [3918cbcc-ee3e-4c15-8d21-f576b50aec1d] Running
	I1028 11:38:18.967252   85575 system_pods.go:74] duration metric: took 153.286407ms to wait for pod list to return data ...
	I1028 11:38:18.967263   85575 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:38:19.085360   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:19.113464   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:19.114452   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:19.158362   85575 default_sa.go:45] found service account: "default"
	I1028 11:38:19.158385   85575 default_sa.go:55] duration metric: took 191.1116ms for default service account to be created ...
	I1028 11:38:19.158394   85575 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:38:19.230583   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:19.365083   85575 system_pods.go:86] 18 kube-system pods found
	I1028 11:38:19.365114   85575 system_pods.go:89] "amd-gpu-device-plugin-hf6nm" [0741f17c-8923-4320-9291-a8c931291ac0] Running
	I1028 11:38:19.365121   85575 system_pods.go:89] "coredns-7c65d6cfc9-6tgvv" [3f418701-d48a-4380-a42c-d4facbdb4f25] Running
	I1028 11:38:19.365128   85575 system_pods.go:89] "csi-hostpath-attacher-0" [b72fb2c5-aba3-42da-8842-c7c82b4dc7d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 11:38:19.365135   85575 system_pods.go:89] "csi-hostpath-resizer-0" [fbb4ad73-884c-49ad-afce-83f9db13c7bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 11:38:19.365142   85575 system_pods.go:89] "csi-hostpathplugin-w9lwc" [a47fd224-db98-4ad2-b5d3-3c0215182531] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 11:38:19.365146   85575 system_pods.go:89] "etcd-addons-558164" [9cc5084d-707b-43d9-b040-6fd37f3039d1] Running
	I1028 11:38:19.365151   85575 system_pods.go:89] "kube-apiserver-addons-558164" [64eab89a-fbf4-4a89-89a7-fe2d257b2c4a] Running
	I1028 11:38:19.365154   85575 system_pods.go:89] "kube-controller-manager-addons-558164" [fee40a2a-2feb-46d2-8d34-673155f16349] Running
	I1028 11:38:19.365162   85575 system_pods.go:89] "kube-ingress-dns-minikube" [4897117b-12e8-4427-823d-350b57c963e1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1028 11:38:19.365165   85575 system_pods.go:89] "kube-proxy-pbrhz" [1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa] Running
	I1028 11:38:19.365170   85575 system_pods.go:89] "kube-scheduler-addons-558164" [d18c9948-8ede-493d-b11d-548cd422d0a3] Running
	I1028 11:38:19.365175   85575 system_pods.go:89] "metrics-server-84c5f94fbc-xzgq8" [7cebd793-5c4b-4588-bba1-fdb19c5e4fe4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 11:38:19.365181   85575 system_pods.go:89] "nvidia-device-plugin-daemonset-tmgxz" [2222e84c-777d-4de9-a7d0-c0f8307c6df7] Running
	I1028 11:38:19.365186   85575 system_pods.go:89] "registry-66c9cd494c-knm9h" [ef5d7a78-4f98-44f2-8f1f-121ec2384ac3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1028 11:38:19.365194   85575 system_pods.go:89] "registry-proxy-6mfkq" [4c6c611d-0f32-46ff-b60d-db1ab8734769] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1028 11:38:19.365202   85575 system_pods.go:89] "snapshot-controller-56fcc65765-9492j" [69e9e3e8-53e2-4132-a09f-5f8ce0b786a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:19.365208   85575 system_pods.go:89] "snapshot-controller-56fcc65765-brfbf" [26209eed-8f71-4c6e-b5ec-7232a38b8ec5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 11:38:19.365215   85575 system_pods.go:89] "storage-provisioner" [3918cbcc-ee3e-4c15-8d21-f576b50aec1d] Running
	I1028 11:38:19.365224   85575 system_pods.go:126] duration metric: took 206.823166ms to wait for k8s-apps to be running ...
	I1028 11:38:19.365232   85575 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:38:19.365277   85575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:38:19.393954   85575 system_svc.go:56] duration metric: took 28.710964ms WaitForService to wait for kubelet
	I1028 11:38:19.393981   85575 kubeadm.go:582] duration metric: took 21.869263514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:38:19.394001   85575 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:38:19.560372   85575 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:38:19.560400   85575 node_conditions.go:123] node cpu capacity is 2
	I1028 11:38:19.560414   85575 node_conditions.go:105] duration metric: took 166.408086ms to run NodePressure ...
	I1028 11:38:19.560427   85575 start.go:241] waiting for startup goroutines ...
	I1028 11:38:19.584849   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:19.613852   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:19.614237   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:19.729865   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:20.084689   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:20.113998   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:20.114935   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:20.231281   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:20.585533   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:20.613436   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:20.614423   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:20.730286   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:21.085246   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:21.112412   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:21.113224   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:21.229897   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:21.679304   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:21.679408   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:21.679905   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:21.777947   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:22.085180   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:22.113736   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:22.113870   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:22.230498   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:22.586406   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:22.613742   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:22.614445   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:22.729751   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:23.085546   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:23.112538   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:23.112783   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:23.229569   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:23.586050   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:23.612683   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:23.614748   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:23.730186   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:24.084743   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:24.113652   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:24.113834   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:24.230085   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:24.585318   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:24.613658   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:24.614091   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:24.730503   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:25.085173   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:25.113260   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:25.113901   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:25.229699   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:25.584826   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:25.615072   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:25.615085   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:25.729396   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:26.085388   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:26.113419   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:26.113629   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:26.230040   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:26.585576   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:26.613210   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:26.613885   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:26.730172   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:27.085069   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:27.114422   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:27.114524   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:27.229967   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:27.585568   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:27.613502   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:27.614635   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:27.730823   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:28.084609   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:28.113850   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:28.114026   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:28.229339   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:28.585076   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:28.616112   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:28.618045   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:28.731113   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:29.085144   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:29.114101   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:29.114228   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:29.230238   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:29.585850   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:29.613404   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:29.614225   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:29.729930   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:30.084713   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:30.113144   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:30.114070   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:30.230771   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:30.585684   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:30.613518   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:30.613900   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:30.730126   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:31.085936   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:31.113909   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:31.114351   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:31.229732   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:31.585822   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:31.614147   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:31.616194   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:31.730201   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:32.086291   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:32.113641   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:32.113763   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:32.230723   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:32.585705   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:32.614112   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:32.614999   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:32.729907   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:33.084886   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:33.114082   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:33.114174   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:33.229832   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:33.586791   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:33.614823   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:33.617546   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:33.730241   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:34.085554   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:34.118098   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:34.118220   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:34.229478   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:34.585604   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:34.614084   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:34.614330   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:34.730175   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:35.085063   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:35.113633   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:35.113752   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:35.231151   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:35.585521   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:35.613489   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:35.613798   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:35.730464   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:36.086575   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:36.113459   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 11:38:36.114740   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:36.230299   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:36.585743   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:36.614032   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:36.615395   85575 kapi.go:107] duration metric: took 31.005961393s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 11:38:36.730081   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:37.085275   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:37.112747   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:37.230225   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:37.585637   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:37.613947   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:37.730260   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:38.085359   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:38.112753   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:38.230003   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:38.585834   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:38.613750   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:38.729799   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:39.085536   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:39.113673   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:39.229931   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:39.585184   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:39.614201   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:39.730223   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:40.085906   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:40.114866   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:40.229865   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:40.873892   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:40.874177   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:40.875180   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:41.086400   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:41.112913   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:41.230418   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:41.585576   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:41.613658   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:41.730797   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:42.085061   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:42.113856   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:42.230484   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:42.585296   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:42.613637   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:42.729839   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:43.084882   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:43.113931   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:43.230337   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:43.586064   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:43.613839   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:43.730687   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:44.085633   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:44.112960   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:44.230445   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:44.585776   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:44.613368   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:44.730496   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:45.085523   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:45.113188   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:45.230240   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:45.585390   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:45.612763   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:45.731143   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:46.086325   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:46.113765   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:46.229981   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:46.585075   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:46.613823   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:46.730384   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:47.085382   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:47.113342   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:47.230531   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:47.584448   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:47.614056   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:47.730749   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:48.085519   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:48.112985   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:48.230084   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:48.585932   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:48.613428   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:48.730432   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:49.088379   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:49.113764   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:49.230242   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:49.586377   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:49.613722   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:49.729549   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:50.089248   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:50.189467   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:50.230269   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:50.585824   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:50.613677   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:50.730308   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:51.086355   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:51.114785   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:51.232032   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:51.586014   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:51.617015   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:51.731910   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:52.086234   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:52.113016   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:52.230849   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:52.585554   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:52.613696   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:52.729540   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:53.085654   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:53.113788   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:53.230520   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:53.642690   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:53.644408   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:53.730695   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:54.087230   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:54.114213   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:54.230667   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:54.585251   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:54.613137   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:54.730460   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:55.085868   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:55.113191   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:55.229527   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:55.586438   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:55.616076   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:55.729939   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:56.084590   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:56.113232   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:56.230818   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:56.584595   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:56.613124   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:56.730337   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:57.088530   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:57.113752   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:57.230143   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:57.586664   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:57.613413   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:57.730866   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:58.084916   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:58.113406   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:58.232615   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:58.585760   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:58.613309   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:58.729580   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:59.098746   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:59.114996   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:59.231037   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:38:59.587067   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:38:59.613694   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:38:59.734859   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:00.273804   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:00.273955   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:00.274636   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:00.587002   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:00.613267   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:00.729293   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:01.085430   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:01.112737   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:01.230461   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:01.587603   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:01.615336   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:01.734928   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:02.086235   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:02.112469   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:02.230255   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:02.586244   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:02.614353   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:02.734786   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:03.085193   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:03.116857   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:03.230391   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:03.584785   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:03.613359   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:03.729703   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:04.086018   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:04.113473   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:04.230638   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:04.586103   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:04.613351   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:04.729728   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:05.085472   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 11:39:05.112483   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:05.230169   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:05.585544   85575 kapi.go:107] duration metric: took 58.504874375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 11:39:05.613323   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:05.737580   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:06.113251   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:06.230008   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:06.614493   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:06.729620   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:07.113584   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:07.230092   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:07.613279   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:07.729521   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:08.113416   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:08.229991   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:08.613406   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:08.729888   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:09.113742   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:09.229858   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:09.613518   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:09.732442   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:10.113704   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:10.231148   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:10.614166   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:10.731000   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:11.115419   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:11.229795   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:11.655748   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:11.871789   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:12.114029   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:12.230901   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:12.613419   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:12.730243   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:13.113459   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:13.229787   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:13.613438   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:13.730541   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:14.113822   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:14.232424   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:14.613639   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:14.730382   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:15.114248   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:15.230537   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:15.613599   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:15.729603   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:16.115397   85575 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 11:39:16.230753   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:16.615424   85575 kapi.go:107] duration metric: took 1m11.006180181s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 11:39:16.729986   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:17.233198   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:17.730667   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:18.230155   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:18.730481   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:19.229836   85575 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 11:39:19.730390   85575 kapi.go:107] duration metric: took 1m11.003864421s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 11:39:19.732115   85575 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-558164 cluster.
	I1028 11:39:19.733669   85575 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 11:39:19.734832   85575 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1028 11:39:19.736138   85575 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner-rancher, cloud-spanner, ingress-dns, yakd, storage-provisioner, metrics-server, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1028 11:39:19.737342   85575 addons.go:510] duration metric: took 1m22.212581031s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin storage-provisioner-rancher cloud-spanner ingress-dns yakd storage-provisioner metrics-server inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1028 11:39:19.737385   85575 start.go:246] waiting for cluster config update ...
	I1028 11:39:19.737404   85575 start.go:255] writing updated cluster config ...
	I1028 11:39:19.737663   85575 ssh_runner.go:195] Run: rm -f paused
	I1028 11:39:19.786000   85575 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:39:19.787416   85575 out.go:177] * Done! kubectl is now configured to use "addons-558164" cluster and "default" namespace by default
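	(Editor's note: the gcp-auth messages above say that credentials are mounted into every pod unless the pod carries a `gcp-auth-skip-secret` label. The sketch below is not part of the test run; it is a minimal, hypothetical illustration of such a pod definition built with the Kubernetes Go types. Only the label key comes from the log; the pod name, namespace, image, and the label value "true" are assumptions, and the program assumes the k8s.io/api, k8s.io/apimachinery, and sigs.k8s.io/yaml modules are available.)

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Hypothetical pod that opts out of GCP credential mounting via the
		// gcp-auth-skip-secret label mentioned in the minikube output above.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // assumed name, for illustration only
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}

		// Emit the manifest as YAML so it could be piped to kubectl.
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}

	Piping the emitted manifest to kubectl apply against the addons-558164 context would create a pod that the gcp-auth webhook is expected to leave without the mounted credentials secret, per the message above.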
	
	
	==> CRI-O <==
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.126415299Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115964126362899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46937d15-1d30-487f-be94-8c7515e8f9ca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.126887618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bef0c28-b3fa-4539-a36a-32ea4c90589e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.126942084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bef0c28-b3fa-4539-a36a-32ea4c90589e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.127185034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78eec0e42d18ddc32d761fd33b3a77dcef27035e9c383a4e415d1b6a9c6002e2,PodSandboxId:e59bfc246a344581f5616a1083ada35c28dfa995a8a207579a77f473db06793e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730115760465883261,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-vm8nt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95365b42-ab21-4c85-9c67-ac097572e19c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-9
9ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf116763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f641318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca32cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe351350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bef0c28-b3fa-4539-a36a-32ea4c90589e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.161527695Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42848833-98d6-4ead-bba1-25bb61a7dc31 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.161608479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42848833-98d6-4ead-bba1-25bb61a7dc31 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.162660920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4982792d-6ec1-4326-a865-24a5c95903a4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.164206199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115964164177167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4982792d-6ec1-4326-a865-24a5c95903a4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.164769137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf8a3419-56dd-4f94-a613-73b606e3b55f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.164827461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf8a3419-56dd-4f94-a613-73b606e3b55f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.165091973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78eec0e42d18ddc32d761fd33b3a77dcef27035e9c383a4e415d1b6a9c6002e2,PodSandboxId:e59bfc246a344581f5616a1083ada35c28dfa995a8a207579a77f473db06793e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730115760465883261,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-vm8nt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95365b42-ab21-4c85-9c67-ac097572e19c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-9
9ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf116763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f641318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca32cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe351350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf8a3419-56dd-4f94-a613-73b606e3b55f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.202466809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a304374d-bc22-4e08-a2c7-b6fee232b14e name=/runtime.v1.RuntimeService/Version
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.202550474Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a304374d-bc22-4e08-a2c7-b6fee232b14e name=/runtime.v1.RuntimeService/Version
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.203709647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a58c6167-c894-496d-a4ea-bd61ce186f94 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.205136817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115964205111102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a58c6167-c894-496d-a4ea-bd61ce186f94 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.205607091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb5f6e1d-fe73-4eeb-ae37-8f9a23dceb69 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.205667993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb5f6e1d-fe73-4eeb-ae37-8f9a23dceb69 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.206070556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78eec0e42d18ddc32d761fd33b3a77dcef27035e9c383a4e415d1b6a9c6002e2,PodSandboxId:e59bfc246a344581f5616a1083ada35c28dfa995a8a207579a77f473db06793e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730115760465883261,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-vm8nt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95365b42-ab21-4c85-9c67-ac097572e19c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-9
9ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf116763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f641318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca32cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe351350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb5f6e1d-fe73-4eeb-ae37-8f9a23dceb69 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.235694506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3c3b90b-5986-43c3-86c0-315863ae7374 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.235809077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3c3b90b-5986-43c3-86c0-315863ae7374 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.237894030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=314c7cbd-1768-4fe5-80bc-e06c00bbf043 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.239096270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115964239066206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=314c7cbd-1768-4fe5-80bc-e06c00bbf043 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.239621210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=167a907e-9f4c-47b4-95a2-a51a0fc5c540 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.239680245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=167a907e-9f4c-47b4-95a2-a51a0fc5c540 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:46:04 addons-558164 crio[664]: time="2024-10-28 11:46:04.240024908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78eec0e42d18ddc32d761fd33b3a77dcef27035e9c383a4e415d1b6a9c6002e2,PodSandboxId:e59bfc246a344581f5616a1083ada35c28dfa995a8a207579a77f473db06793e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730115760465883261,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-vm8nt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95365b42-ab21-4c85-9c67-ac097572e19c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfeaf60f156b95c9cc6cc4fb36a2034bdf717fccc4c15160cdc38c5f64e1e20,PodSandboxId:62c9b018b548ec0a4b1e32db07405303e0b23fbc0cab22a75552c3e15604bab8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730115620791715013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87e4099c-e1d5-4974-ab0b-e2de82c733dc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437168bf2c6579a0273d1930c564161f5ae1f6324a7807fc0fc95d21dd426c24,PodSandboxId:802b7b16db1daecfe97dcb009fbb99bfafe25fad4d1164e642f973212e96dc5e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730115562952423738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a0e12d7-e422-4b10-9
9ec-bb257d1f85e6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d8b4d6c1f7073875251d4946c91645ad56b307a820016b69636bf9bcb523,PodSandboxId:0c6207db6229ab4a683399a3393f113ed769c23ee079be5d0f113aea9b5f609a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730115512770513006,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-xzgq8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 7cebd793-5c4b-4588-bba1-fdb19c5e4fe4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30ed896c3ed46f352c560b818b3b9a3c82aba9aa3760b65db3cd07f7bfddf4c,PodSandboxId:72d8edbce8132f5fad725bd29505a1c04f232ce67f90b8603694ebf116763447,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730115496757840648,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hf6nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0741f17c-8923-4320-9291-a8c931291ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a,PodSandboxId:34d3ff3415a5f8f641318cc77bf9d723be7ec8a02464c17f8c3874fad6e03fe5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730115483650932683,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3918cbcc-ee3e-4c15-8d21-f576b50aec1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008,PodSandboxId:e24847365280f632b5088f9f6bca32cff535910885d3845731d79a052d5dd49b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730115481199658464,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6tgvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f418701-d48a-4380-a42c-d4facbdb4f25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff,PodSandboxId:5615740d67274f539462f387ac5d3d10c8df51e0a9541c7a5b0b2c2b42be39c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730115478720808010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pbrhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e86c592-ba6b-4296-b9b3-ae17ddc3e7fa,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe,PodSandboxId:5ea903e5aa2f88f5e68878ac079888a94469bb84ca42421d820c24349ddbf52e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730115467110224526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8047e8b95534bbec00a53f558ef7c4c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce,PodSandboxId:7d82d01a9fd32a81a1810da6e8da69cf2187a8de18bb331869202cb1ea948c79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1730115467108182575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66eeb009db5029dbece0b93578f79650,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942b5fe351350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf,PodSandboxId:c361b789c8f62deb5b48c72348b05899cc402826139c55dd303778013de37fe9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1730115467007701596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70fd3cb4be994cb07237df5d146546a7,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449d05a1cadae8f9c712ab9d8b841c38231dea63911dc13410458b2e8fdca71,PodSandboxId:40ebadee68961449fe26689458699f3019f72125128cb562fe87cfc8b2156f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1730115466964282959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-558164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3405a9a3ebfeb38a3ad51ba8a29648da,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=167a907e-9f4c-47b4-95a2-a51a0fc5c540 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	78eec0e42d18d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   e59bfc246a344       hello-world-app-55bf9c44b4-vm8nt
	fdfeaf60f156b       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   62c9b018b548e       nginx
	437168bf2c657       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   802b7b16db1da       busybox
	6181d8b4d6c1f       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   0c6207db6229a       metrics-server-84c5f94fbc-xzgq8
	e30ed896c3ed4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   72d8edbce8132       amd-gpu-device-plugin-hf6nm
	b84352c4fbc20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   34d3ff3415a5f       storage-provisioner
	f39e4e545f8d3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   e24847365280f       coredns-7c65d6cfc9-6tgvv
	53485b60a86d8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   5615740d67274       kube-proxy-pbrhz
	2f17b035df516       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   5ea903e5aa2f8       etcd-addons-558164
	c04614c73051f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   7d82d01a9fd32       kube-apiserver-addons-558164
	942b5fe351350       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   c361b789c8f62       kube-scheduler-addons-558164
	b449d05a1cada       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   40ebadee68961       kube-controller-manager-addons-558164
	
	
	==> coredns [f39e4e545f8d3a64251b2c2f3c83a31bd886c0e06c5db134358795bf12e01008] <==
	[INFO] 10.244.0.22:54017 - 34050 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000077765s
	[INFO] 10.244.0.22:60764 - 65111 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000940716s
	[INFO] 10.244.0.22:54017 - 55562 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066998s
	[INFO] 10.244.0.22:60764 - 24549 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074145s
	[INFO] 10.244.0.22:60764 - 18888 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000723s
	[INFO] 10.244.0.22:60764 - 56401 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000252308s
	[INFO] 10.244.0.22:54017 - 59973 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091493s
	[INFO] 10.244.0.22:54017 - 53463 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007353s
	[INFO] 10.244.0.22:54017 - 34992 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055819s
	[INFO] 10.244.0.22:54017 - 52332 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000096356s
	[INFO] 10.244.0.22:54017 - 7704 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066641s
	[INFO] 10.244.0.22:34239 - 61231 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000130398s
	[INFO] 10.244.0.22:50595 - 9630 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075713s
	[INFO] 10.244.0.22:34239 - 45086 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067154s
	[INFO] 10.244.0.22:34239 - 35974 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065344s
	[INFO] 10.244.0.22:50595 - 35888 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006987s
	[INFO] 10.244.0.22:34239 - 19074 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073143s
	[INFO] 10.244.0.22:34239 - 45249 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000188722s
	[INFO] 10.244.0.22:34239 - 28809 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000130693s
	[INFO] 10.244.0.22:50595 - 21644 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008183s
	[INFO] 10.244.0.22:50595 - 47045 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000081775s
	[INFO] 10.244.0.22:34239 - 7162 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059437s
	[INFO] 10.244.0.22:50595 - 30213 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069103s
	[INFO] 10.244.0.22:50595 - 63528 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092085s
	[INFO] 10.244.0.22:50595 - 11137 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000080186s
	
	
	==> describe nodes <==
	Name:               addons-558164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-558164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=addons-558164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_37_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-558164
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:37:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-558164
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:46:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:42:59 +0000   Mon, 28 Oct 2024 11:37:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:42:59 +0000   Mon, 28 Oct 2024 11:37:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:42:59 +0000   Mon, 28 Oct 2024 11:37:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:42:59 +0000   Mon, 28 Oct 2024 11:37:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    addons-558164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 de5182324cb444329f5f4628a3a73a2c
	  System UUID:                de518232-4cb4-4432-9f5f-4628a3a73a2c
	  Boot ID:                    1a41ae33-de4f-4d75-8f59-2e9cade0ce3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  default                     hello-world-app-55bf9c44b4-vm8nt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 amd-gpu-device-plugin-hf6nm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 coredns-7c65d6cfc9-6tgvv                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m7s
	  kube-system                 etcd-addons-558164                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m12s
	  kube-system                 kube-apiserver-addons-558164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-controller-manager-addons-558164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-proxy-pbrhz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-addons-558164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 metrics-server-84c5f94fbc-xzgq8          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m3s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m4s   kube-proxy       
	  Normal  Starting                 8m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m12s  kubelet          Node addons-558164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m12s  kubelet          Node addons-558164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m12s  kubelet          Node addons-558164 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m11s  kubelet          Node addons-558164 status is now: NodeReady
	  Normal  RegisteredNode           8m8s   node-controller  Node addons-558164 event: Registered Node addons-558164 in Controller
	
	
	==> dmesg <==
	[  +0.075285] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.063590] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.230213] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[Oct28 11:38] kauditd_printk_skb: 134 callbacks suppressed
	[  +5.030015] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.036925] kauditd_printk_skb: 63 callbacks suppressed
	[  +9.307677] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.384346] kauditd_printk_skb: 9 callbacks suppressed
	[ +12.130528] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.307274] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.433562] kauditd_printk_skb: 44 callbacks suppressed
	[Oct28 11:39] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.174218] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.123349] kauditd_printk_skb: 18 callbacks suppressed
	[ +19.156058] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.090812] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.034238] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.390912] kauditd_printk_skb: 44 callbacks suppressed
	[Oct28 11:40] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.107545] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.011790] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.954368] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.949438] kauditd_printk_skb: 7 callbacks suppressed
	[Oct28 11:42] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.794250] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [2f17b035df516c80671409eb73c14d4f0d9f1b65176a02d69e6080d3cefad3fe] <==
	{"level":"info","ts":"2024-10-28T11:39:11.845556Z","caller":"traceutil/trace.go:171","msg":"trace[953549410] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1123; }","duration":"198.440505ms","start":"2024-10-28T11:39:11.647103Z","end":"2024-10-28T11:39:11.845543Z","steps":["trace[953549410] 'read index received'  (duration: 194.570432ms)","trace[953549410] 'applied index is now lower than readState.Index'  (duration: 3.869336ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:39:11.845643Z","caller":"traceutil/trace.go:171","msg":"trace[2012828142] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"209.815285ms","start":"2024-10-28T11:39:11.635822Z","end":"2024-10-28T11:39:11.845637Z","steps":["trace[2012828142] 'process raft request'  (duration: 209.447548ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:39:11.845788Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.669298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:39:11.845808Z","caller":"traceutil/trace.go:171","msg":"trace[1317541856] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1097; }","duration":"198.703892ms","start":"2024-10-28T11:39:11.647099Z","end":"2024-10-28T11:39:11.845803Z","steps":["trace[1317541856] 'agreement among raft nodes before linearized reading'  (duration: 198.655107ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:39:11.847156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.863256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:39:11.847196Z","caller":"traceutil/trace.go:171","msg":"trace[866272014] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"140.908025ms","start":"2024-10-28T11:39:11.706281Z","end":"2024-10-28T11:39:11.847189Z","steps":["trace[866272014] 'agreement among raft nodes before linearized reading'  (duration: 140.810248ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:40:17.353983Z","caller":"traceutil/trace.go:171","msg":"trace[755972929] transaction","detail":"{read_only:false; response_revision:1573; number_of_response:1; }","duration":"380.322012ms","start":"2024-10-28T11:40:16.973628Z","end":"2024-10-28T11:40:17.353950Z","steps":["trace[755972929] 'process raft request'  (duration: 380.205502ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.354310Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:16.973610Z","time spent":"380.539386ms","remote":"127.0.0.1:48170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1539 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-28T11:40:17.354858Z","caller":"traceutil/trace.go:171","msg":"trace[235816913] linearizableReadLoop","detail":"{readStateIndex:1622; appliedIndex:1622; }","duration":"312.126916ms","start":"2024-10-28T11:40:17.042695Z","end":"2024-10-28T11:40:17.354822Z","steps":["trace[235816913] 'read index received'  (duration: 312.123798ms)","trace[235816913] 'applied index is now lower than readState.Index'  (duration: 2.614µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:40:17.354945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.23341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.354977Z","caller":"traceutil/trace.go:171","msg":"trace[698129893] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1573; }","duration":"312.275722ms","start":"2024-10-28T11:40:17.042691Z","end":"2024-10-28T11:40:17.354967Z","steps":["trace[698129893] 'agreement among raft nodes before linearized reading'  (duration: 312.216455ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.354999Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:17.042657Z","time spent":"312.337392ms","remote":"127.0.0.1:48414","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":28,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true "}
	{"level":"warn","ts":"2024-10-28T11:40:17.394897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.075996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.394955Z","caller":"traceutil/trace.go:171","msg":"trace[911482696] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1574; }","duration":"348.144623ms","start":"2024-10-28T11:40:17.046800Z","end":"2024-10-28T11:40:17.394944Z","steps":["trace[911482696] 'agreement among raft nodes before linearized reading'  (duration: 348.02911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.394983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T11:40:17.046770Z","time spent":"348.207109ms","remote":"127.0.0.1:47912","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-28T11:40:17.395287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.350717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.395330Z","caller":"traceutil/trace.go:171","msg":"trace[1890499628] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1574; }","duration":"106.389502ms","start":"2024-10-28T11:40:17.288928Z","end":"2024-10-28T11:40:17.395317Z","steps":["trace[1890499628] 'agreement among raft nodes before linearized reading'  (duration: 106.339203ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.395461Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.834672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-28T11:40:17.395494Z","caller":"traceutil/trace.go:171","msg":"trace[1130799877] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1574; }","duration":"110.867059ms","start":"2024-10-28T11:40:17.284621Z","end":"2024-10-28T11:40:17.395488Z","steps":["trace[1130799877] 'agreement among raft nodes before linearized reading'  (duration: 110.781381ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.395615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.039333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.395636Z","caller":"traceutil/trace.go:171","msg":"trace[133035816] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:1574; }","duration":"215.062886ms","start":"2024-10-28T11:40:17.180569Z","end":"2024-10-28T11:40:17.395632Z","steps":["trace[133035816] 'agreement among raft nodes before linearized reading'  (duration: 215.029517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:17.395783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.85731ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:17.395814Z","caller":"traceutil/trace.go:171","msg":"trace[446218127] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1574; }","duration":"277.887349ms","start":"2024-10-28T11:40:17.117919Z","end":"2024-10-28T11:40:17.395806Z","steps":["trace[446218127] 'agreement among raft nodes before linearized reading'  (duration: 277.848974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:40:48.898864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.184748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-health-monitor-controller-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:40:48.898923Z","caller":"traceutil/trace.go:171","msg":"trace[610907122] range","detail":"{range_begin:/registry/roles/kube-system/external-health-monitor-controller-cfg; range_end:; response_count:0; response_revision:1791; }","duration":"143.280037ms","start":"2024-10-28T11:40:48.755630Z","end":"2024-10-28T11:40:48.898911Z","steps":["trace[610907122] 'range keys from in-memory index tree'  (duration: 143.097356ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:46:04 up 8 min,  0 users,  load average: 0.38, 0.51, 0.37
	Linux addons-558164 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c04614c73051f23e19a0cd7d701cac146d67da4d2a52080aba89cb604d69b9ce] <==
	 > logger="UnhandledError"
	E1028 11:39:42.144070       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.14:443: connect: connection refused" logger="UnhandledError"
	E1028 11:39:42.146581       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.14:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.14:443: connect: connection refused" logger="UnhandledError"
	I1028 11:39:42.186647       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1028 11:39:43.539170       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.254.124"}
	I1028 11:40:07.833472       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 11:40:08.959934       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 11:40:13.348330       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 11:40:13.548451       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.190.75"}
	E1028 11:40:17.396882       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1028 11:40:24.848706       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 11:40:46.323558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.323595       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.345910       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.345941       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.367613       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.367664       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.370051       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.370157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 11:40:46.423396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 11:40:46.423791       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 11:40:47.369077       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 11:40:47.424984       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 11:40:47.537860       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 11:42:38.078579       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.92.224"}
	
	
	==> kube-controller-manager [b449d05a1cadae8f9c712ab9d8b841c38231dea63911dc13410458b2e8fdca71] <==
	E1028 11:43:57.846191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:43:59.337424       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:43:59.337484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:44:14.589220       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:44:14.589332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:44:15.374413       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:44:15.374469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:44:32.988533       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:44:32.988574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:44:48.861335       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:44:48.861389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:44:55.688831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:44:55.688938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:44:55.867403       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:44:55.867447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:45:27.019963       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:45:27.020022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:45:27.130776       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:45:27.130820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:45:40.390528       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:45:40.390670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:45:52.813870       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:45:52.813994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:46:02.416865       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:46:02.416976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [53485b60a86d8043349a5ee407c1203813ded9b401770390e9e6f0cf8d66deff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:37:59.431701       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:37:59.458408       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.31"]
	E1028 11:37:59.458497       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:37:59.614462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:37:59.614507       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:37:59.614540       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:37:59.623855       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:37:59.624428       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:37:59.624468       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:37:59.630875       1 config.go:199] "Starting service config controller"
	I1028 11:37:59.630889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:37:59.630909       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:37:59.630912       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:37:59.632694       1 config.go:328] "Starting node config controller"
	I1028 11:37:59.632755       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:37:59.731909       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:37:59.731967       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:37:59.733338       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [942b5fe351350c9b1268ee79dcbfa84076a05d6745ed14a1aac806eeffa487cf] <==
	E1028 11:37:49.953659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:49.953690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:37:49.953714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1028 11:37:49.953592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.774910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 11:37:50.775039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.807361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:37:50.808072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.809333       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:37:50.809382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:50.827901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 11:37:50.828119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.032659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:37:51.032782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.057803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 11:37:51.057852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.071503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:37:51.071547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.142409       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 11:37:51.143365       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 11:37:51.164072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:37:51.164125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:37:51.177256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:37:51.177473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 11:37:53.848036       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 11:44:52 addons-558164 kubelet[1218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:44:52 addons-558164 kubelet[1218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:44:52 addons-558164 kubelet[1218]: E1028 11:44:52.814123    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115892813809793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:44:52 addons-558164 kubelet[1218]: E1028 11:44:52.814146    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115892813809793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:01 addons-558164 kubelet[1218]: I1028 11:45:01.514441    1218 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hf6nm" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 11:45:02 addons-558164 kubelet[1218]: E1028 11:45:02.817395    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115902817064340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:02 addons-558164 kubelet[1218]: E1028 11:45:02.817440    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115902817064340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:12 addons-558164 kubelet[1218]: E1028 11:45:12.820211    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115912819902201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:12 addons-558164 kubelet[1218]: E1028 11:45:12.820250    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115912819902201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:22 addons-558164 kubelet[1218]: E1028 11:45:22.822526    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115922822270936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:22 addons-558164 kubelet[1218]: E1028 11:45:22.822853    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115922822270936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:32 addons-558164 kubelet[1218]: E1028 11:45:32.825222    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115932824880953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:32 addons-558164 kubelet[1218]: E1028 11:45:32.825474    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115932824880953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:42 addons-558164 kubelet[1218]: E1028 11:45:42.827693    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115942827402331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:42 addons-558164 kubelet[1218]: E1028 11:45:42.827770    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115942827402331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:52 addons-558164 kubelet[1218]: E1028 11:45:52.528246    1218 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:45:52 addons-558164 kubelet[1218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:45:52 addons-558164 kubelet[1218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:45:52 addons-558164 kubelet[1218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:45:52 addons-558164 kubelet[1218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:45:52 addons-558164 kubelet[1218]: E1028 11:45:52.830759    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115952830419538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:52 addons-558164 kubelet[1218]: E1028 11:45:52.830829    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115952830419538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:45:53 addons-558164 kubelet[1218]: I1028 11:45:53.514852    1218 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 11:46:02 addons-558164 kubelet[1218]: E1028 11:46:02.834369    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115962833584310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:46:02 addons-558164 kubelet[1218]: E1028 11:46:02.834405    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730115962833584310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b84352c4fbc2044df1add97112edb3ca1381e6340594430d7d49fbebbf05f57a] <==
	I1028 11:38:04.083258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:38:04.100891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:38:04.100982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:38:04.120071       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:38:04.120343       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-558164_93a55497-89d5-4390-8a87-18ee76d0a8fe!
	I1028 11:38:04.122265       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e17b3df1-79ea-4484-bd28-570c3e2acea3", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-558164_93a55497-89d5-4390-8a87-18ee76d0a8fe became leader
	I1028 11:38:04.220935       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-558164_93a55497-89d5-4390-8a87-18ee76d0a8fe!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-558164 -n addons-558164
helpers_test.go:261: (dbg) Run:  kubectl --context addons-558164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (366.08s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-558164
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-558164: exit status 82 (2m0.463050024s)

                                                
                                                
-- stdout --
	* Stopping node "addons-558164"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-558164" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-558164
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-558164: exit status 11 (21.70203597s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.31:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-558164" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-558164
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-558164: exit status 11 (6.142859912s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.31:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-558164" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-558164
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-558164: exit status 11 (6.143896944s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.31:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-558164" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 node stop m02 -v=7 --alsologtostderr
E1028 11:57:23.701525   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:33.943571   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:54.425242   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:58:35.387473   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:59:20.375842   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-273199 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.462995971s)

                                                
                                                
-- stdout --
	* Stopping node "ha-273199-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:57:22.357382   99170 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:57:22.357512   99170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:57:22.357521   99170 out.go:358] Setting ErrFile to fd 2...
	I1028 11:57:22.357526   99170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:57:22.357689   99170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:57:22.357914   99170 mustload.go:65] Loading cluster: ha-273199
	I1028 11:57:22.358290   99170 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:57:22.358305   99170 stop.go:39] StopHost: ha-273199-m02
	I1028 11:57:22.358671   99170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:57:22.358725   99170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:57:22.373901   99170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I1028 11:57:22.374314   99170 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:57:22.374960   99170 main.go:141] libmachine: Using API Version  1
	I1028 11:57:22.374984   99170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:57:22.375314   99170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:57:22.377744   99170 out.go:177] * Stopping node "ha-273199-m02"  ...
	I1028 11:57:22.379095   99170 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 11:57:22.379118   99170 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:57:22.379324   99170 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 11:57:22.379354   99170 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:57:22.382278   99170 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:57:22.382666   99170 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:57:22.382693   99170 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:57:22.382810   99170 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:57:22.382963   99170 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:57:22.383090   99170 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:57:22.383216   99170 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:57:22.470472   99170 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 11:57:22.525550   99170 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 11:57:22.580755   99170 main.go:141] libmachine: Stopping "ha-273199-m02"...
	I1028 11:57:22.580786   99170 main.go:141] libmachine: (ha-273199-m02) Calling .GetState
	I1028 11:57:22.582444   99170 main.go:141] libmachine: (ha-273199-m02) Calling .Stop
	I1028 11:57:22.585777   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 0/120
	I1028 11:57:23.587184   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 1/120
	I1028 11:57:24.588380   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 2/120
	I1028 11:57:25.590222   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 3/120
	I1028 11:57:26.592029   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 4/120
	I1028 11:57:27.593904   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 5/120
	I1028 11:57:28.595189   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 6/120
	I1028 11:57:29.596538   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 7/120
	I1028 11:57:30.598023   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 8/120
	I1028 11:57:31.599364   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 9/120
	I1028 11:57:32.601659   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 10/120
	I1028 11:57:33.602920   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 11/120
	I1028 11:57:34.604303   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 12/120
	I1028 11:57:35.605741   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 13/120
	I1028 11:57:36.607170   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 14/120
	I1028 11:57:37.608942   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 15/120
	I1028 11:57:38.610303   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 16/120
	I1028 11:57:39.611832   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 17/120
	I1028 11:57:40.614354   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 18/120
	I1028 11:57:41.615734   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 19/120
	I1028 11:57:42.618027   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 20/120
	I1028 11:57:43.619350   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 21/120
	I1028 11:57:44.620632   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 22/120
	I1028 11:57:45.622872   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 23/120
	I1028 11:57:46.624148   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 24/120
	I1028 11:57:47.626883   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 25/120
	I1028 11:57:48.628330   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 26/120
	I1028 11:57:49.630003   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 27/120
	I1028 11:57:50.631217   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 28/120
	I1028 11:57:51.632828   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 29/120
	I1028 11:57:52.634842   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 30/120
	I1028 11:57:53.636223   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 31/120
	I1028 11:57:54.637995   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 32/120
	I1028 11:57:55.639465   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 33/120
	I1028 11:57:56.640886   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 34/120
	I1028 11:57:57.642724   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 35/120
	I1028 11:57:58.644109   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 36/120
	I1028 11:57:59.645392   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 37/120
	I1028 11:58:00.647642   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 38/120
	I1028 11:58:01.649034   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 39/120
	I1028 11:58:02.650930   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 40/120
	I1028 11:58:03.652170   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 41/120
	I1028 11:58:04.653436   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 42/120
	I1028 11:58:05.654685   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 43/120
	I1028 11:58:06.656035   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 44/120
	I1028 11:58:07.658050   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 45/120
	I1028 11:58:08.659389   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 46/120
	I1028 11:58:09.661025   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 47/120
	I1028 11:58:10.662416   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 48/120
	I1028 11:58:11.663831   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 49/120
	I1028 11:58:12.666078   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 50/120
	I1028 11:58:13.667520   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 51/120
	I1028 11:58:14.669095   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 52/120
	I1028 11:58:15.670434   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 53/120
	I1028 11:58:16.671793   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 54/120
	I1028 11:58:17.673822   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 55/120
	I1028 11:58:18.675022   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 56/120
	I1028 11:58:19.676477   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 57/120
	I1028 11:58:20.678637   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 58/120
	I1028 11:58:21.680145   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 59/120
	I1028 11:58:22.682275   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 60/120
	I1028 11:58:23.683446   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 61/120
	I1028 11:58:24.684621   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 62/120
	I1028 11:58:25.685709   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 63/120
	I1028 11:58:26.686898   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 64/120
	I1028 11:58:27.688837   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 65/120
	I1028 11:58:28.691006   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 66/120
	I1028 11:58:29.692285   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 67/120
	I1028 11:58:30.694046   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 68/120
	I1028 11:58:31.695240   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 69/120
	I1028 11:58:32.696965   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 70/120
	I1028 11:58:33.698221   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 71/120
	I1028 11:58:34.699671   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 72/120
	I1028 11:58:35.701692   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 73/120
	I1028 11:58:36.702919   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 74/120
	I1028 11:58:37.704827   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 75/120
	I1028 11:58:38.706290   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 76/120
	I1028 11:58:39.707509   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 77/120
	I1028 11:58:40.708853   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 78/120
	I1028 11:58:41.710318   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 79/120
	I1028 11:58:42.711839   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 80/120
	I1028 11:58:43.713059   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 81/120
	I1028 11:58:44.715431   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 82/120
	I1028 11:58:45.716690   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 83/120
	I1028 11:58:46.718054   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 84/120
	I1028 11:58:47.719767   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 85/120
	I1028 11:58:48.721003   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 86/120
	I1028 11:58:49.722397   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 87/120
	I1028 11:58:50.723562   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 88/120
	I1028 11:58:51.725049   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 89/120
	I1028 11:58:52.726877   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 90/120
	I1028 11:58:53.728263   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 91/120
	I1028 11:58:54.730129   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 92/120
	I1028 11:58:55.731430   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 93/120
	I1028 11:58:56.732739   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 94/120
	I1028 11:58:57.734092   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 95/120
	I1028 11:58:58.735131   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 96/120
	I1028 11:58:59.736638   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 97/120
	I1028 11:59:00.738261   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 98/120
	I1028 11:59:01.739435   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 99/120
	I1028 11:59:02.741399   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 100/120
	I1028 11:59:03.743009   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 101/120
	I1028 11:59:04.744433   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 102/120
	I1028 11:59:05.746166   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 103/120
	I1028 11:59:06.747618   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 104/120
	I1028 11:59:07.749203   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 105/120
	I1028 11:59:08.750692   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 106/120
	I1028 11:59:09.752060   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 107/120
	I1028 11:59:10.753465   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 108/120
	I1028 11:59:11.754869   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 109/120
	I1028 11:59:12.756503   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 110/120
	I1028 11:59:13.758484   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 111/120
	I1028 11:59:14.759800   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 112/120
	I1028 11:59:15.761991   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 113/120
	I1028 11:59:16.763507   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 114/120
	I1028 11:59:17.765479   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 115/120
	I1028 11:59:18.766863   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 116/120
	I1028 11:59:19.768216   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 117/120
	I1028 11:59:20.769562   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 118/120
	I1028 11:59:21.770740   99170 main.go:141] libmachine: (ha-273199-m02) Waiting for machine to stop 119/120
	I1028 11:59:22.772130   99170 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 11:59:22.772250   99170 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-273199 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr: (18.731241338s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-273199 -n ha-273199
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 logs -n 25: (1.313517781s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m03_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m04 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp testdata/cp-test.txt                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m04_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03:/home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m03 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-273199 node stop m02 -v=7                                                     | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
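	Each row above is one entry from minikube's audit log for the HA copy/ssh checks, i.e. one full CLI invocation against the ha-273199 profile. As a rough illustration of the pattern being exercised (a sketch, not a verbatim reproduction of the test's exact command lines; it assumes the profile and the ha-273199-m04 node are up):
	
	# push a file to a node, read it back over ssh, then pull it to the host
	minikube -p ha-273199 cp testdata/cp-test.txt ha-273199-m04:/home/docker/cp-test.txt
	minikube -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test.txt"
	minikube -p ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt /tmp/cp-test_ha-273199-m04.txt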
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:52:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:52:57.905238   95151 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:52:57.905348   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905358   95151 out.go:358] Setting ErrFile to fd 2...
	I1028 11:52:57.905363   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905525   95151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:52:57.906087   95151 out.go:352] Setting JSON to false
	I1028 11:52:57.907021   95151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5728,"bootTime":1730110650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:52:57.907126   95151 start.go:139] virtualization: kvm guest
	I1028 11:52:57.909586   95151 out.go:177] * [ha-273199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:52:57.911228   95151 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:52:57.911224   95151 notify.go:220] Checking for updates...
	I1028 11:52:57.912881   95151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:52:57.914463   95151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:52:57.915977   95151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:57.917406   95151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:52:57.918858   95151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:52:57.920382   95151 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:52:57.956004   95151 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:52:57.957439   95151 start.go:297] selected driver: kvm2
	I1028 11:52:57.957454   95151 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:52:57.957467   95151 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:52:57.958216   95151 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.958309   95151 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:52:57.973197   95151 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:52:57.973244   95151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:52:57.973498   95151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:52:57.973536   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:52:57.973597   95151 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:52:57.973608   95151 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:52:57.973673   95151 start.go:340] cluster config:
	{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:52:57.973775   95151 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.975793   95151 out.go:177] * Starting "ha-273199" primary control-plane node in "ha-273199" cluster
	I1028 11:52:57.977410   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:52:57.977445   95151 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:52:57.977454   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:52:57.977554   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:52:57.977568   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:52:57.977888   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:52:57.977914   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json: {Name:mk29535b2b544db75ec78b7c2f3618df28a4affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:52:57.978059   95151 start.go:360] acquireMachinesLock for ha-273199: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:52:57.978100   95151 start.go:364] duration metric: took 24.255µs to acquireMachinesLock for "ha-273199"
	I1028 11:52:57.978122   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:52:57.978188   95151 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:52:57.980939   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:52:57.981099   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:57.981147   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:57.995094   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I1028 11:52:57.995525   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:57.996093   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:52:57.996110   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:57.996513   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:57.996734   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:52:57.996948   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:52:57.997198   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:52:57.997236   95151 client.go:168] LocalClient.Create starting
	I1028 11:52:57.997293   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:52:57.997346   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997371   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997456   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:52:57.997488   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997509   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997543   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:52:57.997564   95151 main.go:141] libmachine: (ha-273199) Calling .PreCreateCheck
	I1028 11:52:57.998077   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:52:57.998575   95151 main.go:141] libmachine: Creating machine...
	I1028 11:52:57.998591   95151 main.go:141] libmachine: (ha-273199) Calling .Create
	I1028 11:52:57.998762   95151 main.go:141] libmachine: (ha-273199) Creating KVM machine...
	I1028 11:52:58.000213   95151 main.go:141] libmachine: (ha-273199) DBG | found existing default KVM network
	I1028 11:52:58.000923   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.000765   95174 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045e0}
	I1028 11:52:58.000944   95151 main.go:141] libmachine: (ha-273199) DBG | created network xml: 
	I1028 11:52:58.000958   95151 main.go:141] libmachine: (ha-273199) DBG | <network>
	I1028 11:52:58.000965   95151 main.go:141] libmachine: (ha-273199) DBG |   <name>mk-ha-273199</name>
	I1028 11:52:58.000975   95151 main.go:141] libmachine: (ha-273199) DBG |   <dns enable='no'/>
	I1028 11:52:58.000981   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.000999   95151 main.go:141] libmachine: (ha-273199) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:52:58.001012   95151 main.go:141] libmachine: (ha-273199) DBG |     <dhcp>
	I1028 11:52:58.001028   95151 main.go:141] libmachine: (ha-273199) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:52:58.001044   95151 main.go:141] libmachine: (ha-273199) DBG |     </dhcp>
	I1028 11:52:58.001076   95151 main.go:141] libmachine: (ha-273199) DBG |   </ip>
	I1028 11:52:58.001096   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.001107   95151 main.go:141] libmachine: (ha-273199) DBG | </network>
	I1028 11:52:58.001116   95151 main.go:141] libmachine: (ha-273199) DBG | 
	I1028 11:52:58.006306   95151 main.go:141] libmachine: (ha-273199) DBG | trying to create private KVM network mk-ha-273199 192.168.39.0/24...
	I1028 11:52:58.068689   95151 main.go:141] libmachine: (ha-273199) DBG | private KVM network mk-ha-273199 192.168.39.0/24 created
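	The network XML logged above is what the kvm2 driver hands to libvirt for the mk-ha-273199 private network. The same definition can be inspected or recreated by hand with standard virsh commands; a minimal sketch, assuming the XML has been saved to a hypothetical mk-ha-273199.xml and that virsh points at qemu:///system:
	
	virsh --connect qemu:///system net-define mk-ha-273199.xml   # register the network definition
	virsh --connect qemu:///system net-start mk-ha-273199        # start it (creates the bridge and DHCP range)
	virsh --connect qemu:///system net-dumpxml mk-ha-273199      # confirm the 192.168.39.0/24 DHCP range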
	I1028 11:52:58.068733   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.068675   95174 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.068745   95151 main.go:141] libmachine: (ha-273199) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.068764   95151 main.go:141] libmachine: (ha-273199) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:52:58.068841   95151 main.go:141] libmachine: (ha-273199) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:52:58.350673   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.350525   95174 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa...
	I1028 11:52:58.570859   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570715   95174 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk...
	I1028 11:52:58.570893   95151 main.go:141] libmachine: (ha-273199) DBG | Writing magic tar header
	I1028 11:52:58.570902   95151 main.go:141] libmachine: (ha-273199) DBG | Writing SSH key tar header
	I1028 11:52:58.570910   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570831   95174 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.570926   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199
	I1028 11:52:58.570998   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 (perms=drwx------)
	I1028 11:52:58.571026   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:52:58.571056   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:52:58.571074   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.571082   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:52:58.571094   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:52:58.571102   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:52:58.571107   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home
	I1028 11:52:58.571113   95151 main.go:141] libmachine: (ha-273199) DBG | Skipping /home - not owner
	I1028 11:52:58.571126   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:52:58.571143   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:52:58.571178   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:52:58.571193   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
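	At this point the machine directory holds the boot ISO, the generated SSH key pair and the 20000MB raw disk image, with permissions fixed up as logged above. When a run needs debugging on the agent, the raw image can be sanity-checked directly; a sketch using the path from the log:
	
	qemu-img info /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk   # should report format: raw, virtual size ~20G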
	I1028 11:52:58.571219   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:58.572260   95151 main.go:141] libmachine: (ha-273199) define libvirt domain using xml: 
	I1028 11:52:58.572286   95151 main.go:141] libmachine: (ha-273199) <domain type='kvm'>
	I1028 11:52:58.572294   95151 main.go:141] libmachine: (ha-273199)   <name>ha-273199</name>
	I1028 11:52:58.572299   95151 main.go:141] libmachine: (ha-273199)   <memory unit='MiB'>2200</memory>
	I1028 11:52:58.572304   95151 main.go:141] libmachine: (ha-273199)   <vcpu>2</vcpu>
	I1028 11:52:58.572308   95151 main.go:141] libmachine: (ha-273199)   <features>
	I1028 11:52:58.572313   95151 main.go:141] libmachine: (ha-273199)     <acpi/>
	I1028 11:52:58.572324   95151 main.go:141] libmachine: (ha-273199)     <apic/>
	I1028 11:52:58.572330   95151 main.go:141] libmachine: (ha-273199)     <pae/>
	I1028 11:52:58.572339   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572346   95151 main.go:141] libmachine: (ha-273199)   </features>
	I1028 11:52:58.572356   95151 main.go:141] libmachine: (ha-273199)   <cpu mode='host-passthrough'>
	I1028 11:52:58.572364   95151 main.go:141] libmachine: (ha-273199)   
	I1028 11:52:58.572375   95151 main.go:141] libmachine: (ha-273199)   </cpu>
	I1028 11:52:58.572382   95151 main.go:141] libmachine: (ha-273199)   <os>
	I1028 11:52:58.572393   95151 main.go:141] libmachine: (ha-273199)     <type>hvm</type>
	I1028 11:52:58.572409   95151 main.go:141] libmachine: (ha-273199)     <boot dev='cdrom'/>
	I1028 11:52:58.572428   95151 main.go:141] libmachine: (ha-273199)     <boot dev='hd'/>
	I1028 11:52:58.572442   95151 main.go:141] libmachine: (ha-273199)     <bootmenu enable='no'/>
	I1028 11:52:58.572452   95151 main.go:141] libmachine: (ha-273199)   </os>
	I1028 11:52:58.572462   95151 main.go:141] libmachine: (ha-273199)   <devices>
	I1028 11:52:58.572470   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='cdrom'>
	I1028 11:52:58.572481   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/boot2docker.iso'/>
	I1028 11:52:58.572489   95151 main.go:141] libmachine: (ha-273199)       <target dev='hdc' bus='scsi'/>
	I1028 11:52:58.572513   95151 main.go:141] libmachine: (ha-273199)       <readonly/>
	I1028 11:52:58.572529   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572544   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='disk'>
	I1028 11:52:58.572557   95151 main.go:141] libmachine: (ha-273199)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:52:58.572570   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk'/>
	I1028 11:52:58.572580   95151 main.go:141] libmachine: (ha-273199)       <target dev='hda' bus='virtio'/>
	I1028 11:52:58.572589   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572599   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572625   95151 main.go:141] libmachine: (ha-273199)       <source network='mk-ha-273199'/>
	I1028 11:52:58.572647   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572659   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572669   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572681   95151 main.go:141] libmachine: (ha-273199)       <source network='default'/>
	I1028 11:52:58.572689   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572698   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572708   95151 main.go:141] libmachine: (ha-273199)     <serial type='pty'>
	I1028 11:52:58.572719   95151 main.go:141] libmachine: (ha-273199)       <target port='0'/>
	I1028 11:52:58.572747   95151 main.go:141] libmachine: (ha-273199)     </serial>
	I1028 11:52:58.572759   95151 main.go:141] libmachine: (ha-273199)     <console type='pty'>
	I1028 11:52:58.572769   95151 main.go:141] libmachine: (ha-273199)       <target type='serial' port='0'/>
	I1028 11:52:58.572780   95151 main.go:141] libmachine: (ha-273199)     </console>
	I1028 11:52:58.572789   95151 main.go:141] libmachine: (ha-273199)     <rng model='virtio'>
	I1028 11:52:58.572801   95151 main.go:141] libmachine: (ha-273199)       <backend model='random'>/dev/random</backend>
	I1028 11:52:58.572815   95151 main.go:141] libmachine: (ha-273199)     </rng>
	I1028 11:52:58.572825   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572833   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572844   95151 main.go:141] libmachine: (ha-273199)   </devices>
	I1028 11:52:58.572852   95151 main.go:141] libmachine: (ha-273199) </domain>
	I1028 11:52:58.572861   95151 main.go:141] libmachine: (ha-273199) 
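	The domain XML above (2 vCPUs, 2200MiB of memory, the ISO attached as a CD-ROM boot device, the raw disk, and NICs on mk-ha-273199 and the default network) is then defined and started through libvirt. The equivalent manual flow, as a rough sketch assuming the XML is saved to a hypothetical ha-273199.xml:
	
	virsh --connect qemu:///system define ha-273199.xml      # register the domain
	virsh --connect qemu:///system start ha-273199           # boot the VM
	virsh --connect qemu:///system domifaddr ha-273199       # shows the DHCP lease once the guest is up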
	I1028 11:52:58.577134   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:42:ba:53 in network default
	I1028 11:52:58.577786   95151 main.go:141] libmachine: (ha-273199) Ensuring networks are active...
	I1028 11:52:58.577821   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:58.578546   95151 main.go:141] libmachine: (ha-273199) Ensuring network default is active
	I1028 11:52:58.578856   95151 main.go:141] libmachine: (ha-273199) Ensuring network mk-ha-273199 is active
	I1028 11:52:58.579358   95151 main.go:141] libmachine: (ha-273199) Getting domain xml...
	I1028 11:52:58.580118   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:59.782570   95151 main.go:141] libmachine: (ha-273199) Waiting to get IP...
	I1028 11:52:59.783496   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:59.783901   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:52:59.783927   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:59.783876   95174 retry.go:31] will retry after 311.934457ms: waiting for machine to come up
	I1028 11:53:00.097445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.097916   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.097939   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.097877   95174 retry.go:31] will retry after 388.795801ms: waiting for machine to come up
	I1028 11:53:00.488689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.489130   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.489162   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.489047   95174 retry.go:31] will retry after 341.439374ms: waiting for machine to come up
	I1028 11:53:00.831825   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.832326   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.832354   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.832259   95174 retry.go:31] will retry after 537.545151ms: waiting for machine to come up
	I1028 11:53:01.371089   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.371572   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.371603   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.371503   95174 retry.go:31] will retry after 575.351282ms: waiting for machine to come up
	I1028 11:53:01.948343   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.948756   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.948778   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.948711   95174 retry.go:31] will retry after 886.467527ms: waiting for machine to come up
	I1028 11:53:02.836558   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:02.836900   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:02.836930   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:02.836853   95174 retry.go:31] will retry after 1.015980502s: waiting for machine to come up
	I1028 11:53:03.854959   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:03.855391   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:03.855437   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:03.855271   95174 retry.go:31] will retry after 1.050486499s: waiting for machine to come up
	I1028 11:53:04.907614   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:04.908201   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:04.908229   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:04.908145   95174 retry.go:31] will retry after 1.491832435s: waiting for machine to come up
	I1028 11:53:06.401910   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:06.402491   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:06.402518   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:06.402445   95174 retry.go:31] will retry after 1.441307708s: waiting for machine to come up
	I1028 11:53:07.846099   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:07.846578   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:07.846619   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:07.846526   95174 retry.go:31] will retry after 2.820165032s: waiting for machine to come up
	I1028 11:53:10.670238   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:10.670586   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:10.670616   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:10.670541   95174 retry.go:31] will retry after 2.961295833s: waiting for machine to come up
	I1028 11:53:13.633316   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:13.633782   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:13.633805   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:13.633732   95174 retry.go:31] will retry after 3.308614209s: waiting for machine to come up
	I1028 11:53:16.945522   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:16.946011   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:16.946110   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:16.946030   95174 retry.go:31] will retry after 3.990479431s: waiting for machine to come up
	I1028 11:53:20.937712   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938109   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has current primary IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938130   95151 main.go:141] libmachine: (ha-273199) Found IP for machine: 192.168.39.208
	I1028 11:53:20.938142   95151 main.go:141] libmachine: (ha-273199) Reserving static IP address...
	I1028 11:53:20.938499   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find host DHCP lease matching {name: "ha-273199", mac: "52:54:00:22:d4:52", ip: "192.168.39.208"} in network mk-ha-273199
	I1028 11:53:21.008969   95151 main.go:141] libmachine: (ha-273199) DBG | Getting to WaitForSSH function...
	I1028 11:53:21.008999   95151 main.go:141] libmachine: (ha-273199) Reserved static IP address: 192.168.39.208
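	Reserving the static IP pins the 52:54:00:22:d4:52 / 192.168.39.208 pair in the network's DHCP configuration so the node keeps the same address across restarts. Outside of minikube the usual libvirt way to do this is virsh net-update (a hedged sketch, not necessarily the exact call the driver makes):
	
	virsh --connect qemu:///system net-update mk-ha-273199 add ip-dhcp-host \
	  "<host mac='52:54:00:22:d4:52' name='ha-273199' ip='192.168.39.208'/>" \
	  --live --config   # apply to the running network and persist it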
	I1028 11:53:21.009011   95151 main.go:141] libmachine: (ha-273199) Waiting for SSH to be available...
	I1028 11:53:21.011668   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012047   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.012076   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012164   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH client type: external
	I1028 11:53:21.012204   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa (-rw-------)
	I1028 11:53:21.012233   95151 main.go:141] libmachine: (ha-273199) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:53:21.012252   95151 main.go:141] libmachine: (ha-273199) DBG | About to run SSH command:
	I1028 11:53:21.012267   95151 main.go:141] libmachine: (ha-273199) DBG | exit 0
	I1028 11:53:21.139407   95151 main.go:141] libmachine: (ha-273199) DBG | SSH cmd err, output: <nil>: 
	I1028 11:53:21.139608   95151 main.go:141] libmachine: (ha-273199) KVM machine creation complete!
	I1028 11:53:21.140109   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:21.140683   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.140882   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.141093   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:53:21.141114   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:21.142660   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:53:21.142693   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:53:21.142699   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:53:21.142707   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.144906   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145252   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.145272   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145401   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.145570   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145700   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145811   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.145966   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.146169   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.146182   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:53:21.258494   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:53:21.258518   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:53:21.258525   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.261399   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.261893   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.261920   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.262110   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.262319   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262635   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.262887   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.263058   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.263068   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:53:21.376384   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:53:21.376474   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:53:21.376484   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:53:21.376495   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376737   95151 buildroot.go:166] provisioning hostname "ha-273199"
	I1028 11:53:21.376768   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376959   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.379689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380146   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.380176   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380378   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.380584   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380744   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380879   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.381094   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.381292   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.381311   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199 && echo "ha-273199" | sudo tee /etc/hostname
	I1028 11:53:21.505313   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 11:53:21.505340   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.507973   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508308   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.508335   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508498   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.508627   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508764   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.509011   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.509180   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.509205   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:53:21.627427   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
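	The inline script above sets the kernel hostname and rewrites the 127.0.1.1 entry in /etc/hosts so both agree on ha-273199. A quick way to confirm the result from outside the guest (a sketch, not part of the test flow):
	
	minikube -p ha-273199 ssh "hostname; grep ha-273199 /etc/hosts"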
	I1028 11:53:21.627469   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:53:21.627526   95151 buildroot.go:174] setting up certificates
	I1028 11:53:21.627546   95151 provision.go:84] configureAuth start
	I1028 11:53:21.627563   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.627864   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:21.630491   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.630851   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.630879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.631007   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.633459   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.633874   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.633904   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.634035   95151 provision.go:143] copyHostCerts
	I1028 11:53:21.634064   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634109   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:53:21.634121   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634183   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:53:21.634289   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634308   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:53:21.634312   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634344   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:53:21.634423   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634439   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:53:21.634443   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634469   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:53:21.634525   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199 san=[127.0.0.1 192.168.39.208 ha-273199 localhost minikube]
	I1028 11:53:21.941769   95151 provision.go:177] copyRemoteCerts
	I1028 11:53:21.941844   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:53:21.941871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.944301   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944588   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.944615   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944775   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.945004   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.945172   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.945312   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.028802   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:53:22.028910   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:53:22.051394   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:53:22.051457   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 11:53:22.072047   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:53:22.072099   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
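	The server certificate copied to /etc/docker/server.pem above was generated with SANs [127.0.0.1 192.168.39.208 ha-273199 localhost minikube], so TLS problems later in a run can be narrowed down by checking the SANs on the node itself. A sketch, reusing the SSH key path from the log:
	
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa docker@192.168.39.208 \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 'Subject Alternative Name'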
	I1028 11:53:22.092704   95151 provision.go:87] duration metric: took 465.141947ms to configureAuth
	I1028 11:53:22.092729   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:53:22.092901   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:22.092986   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.095606   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.095961   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.095988   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.096168   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.096372   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096528   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096655   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.096802   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.096954   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.096969   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:53:22.312757   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
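	The command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; presumably the crio unit on the guest reads that file as an environment file. A minimal way to double-check it from the host (a sketch, not part of the test flow):
	
	minikube -p ha-273199 ssh "cat /etc/sysconfig/crio.minikube; systemctl cat crio | grep -i environment; systemctl is-active crio"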
	
	I1028 11:53:22.312785   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:53:22.312806   95151 main.go:141] libmachine: (ha-273199) Calling .GetURL
	I1028 11:53:22.313992   95151 main.go:141] libmachine: (ha-273199) DBG | Using libvirt version 6000000
	I1028 11:53:22.316240   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316567   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.316595   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316828   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:53:22.316850   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:53:22.316861   95151 client.go:171] duration metric: took 24.31961411s to LocalClient.Create
	I1028 11:53:22.316914   95151 start.go:167] duration metric: took 24.319696986s to libmachine.API.Create "ha-273199"
	I1028 11:53:22.316928   95151 start.go:293] postStartSetup for "ha-273199" (driver="kvm2")
	I1028 11:53:22.316942   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:53:22.316962   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.317200   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:53:22.317223   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.319445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320158   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.320178   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320347   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.320534   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.320674   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.320778   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.406034   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:53:22.409957   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:53:22.409983   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:53:22.410056   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:53:22.410194   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:53:22.410209   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:53:22.410362   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:53:22.418934   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:22.439625   95151 start.go:296] duration metric: took 122.683745ms for postStartSetup
	I1028 11:53:22.439684   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:22.440268   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.442702   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443017   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.443035   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443281   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:22.443438   95151 start.go:128] duration metric: took 24.465239541s to createHost
	I1028 11:53:22.443459   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.446282   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446621   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.446650   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446768   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.446935   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447095   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447222   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.447404   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.447574   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.447589   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:53:22.559751   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116402.538168741
	
	I1028 11:53:22.559780   95151 fix.go:216] guest clock: 1730116402.538168741
	I1028 11:53:22.559788   95151 fix.go:229] Guest: 2024-10-28 11:53:22.538168741 +0000 UTC Remote: 2024-10-28 11:53:22.443448629 +0000 UTC m=+24.575720280 (delta=94.720112ms)
	I1028 11:53:22.559821   95151 fix.go:200] guest clock delta is within tolerance: 94.720112ms
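The clock check above compares the guest's `date +%s.%N` output against the host-side timestamp taken just before the SSH call and only adjusts the clock if the delta exceeds a tolerance. A minimal Go sketch of that comparison; parseGuestClock and the 2-second tolerance are illustrative assumptions, not minikube's actual fix.go code.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1730116402.538168741" (output of `date +%s.%N`) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// normalize the fractional part to exactly nine digits (nanoseconds)
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730116402.538168741")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 10, 28, 11, 53, 22, 443448629, time.UTC) // host-side reference from the log
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed threshold; the real value may differ
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync the clock\n", delta)
	}
}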
	I1028 11:53:22.559826   95151 start.go:83] releasing machines lock for "ha-273199", held for 24.581718789s
	I1028 11:53:22.559851   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.560134   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.562796   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563147   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.563185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563312   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563844   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563988   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.564076   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:53:22.564130   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.564190   95151 ssh_runner.go:195] Run: cat /version.json
	I1028 11:53:22.564216   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.566705   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.566929   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567041   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567064   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567296   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567390   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567416   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567469   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567580   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567668   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567738   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567794   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.567840   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567980   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.670647   95151 ssh_runner.go:195] Run: systemctl --version
	I1028 11:53:22.676078   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:53:22.830303   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:53:22.836224   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:53:22.836288   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:53:22.850695   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:53:22.850718   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:53:22.850775   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:53:22.865306   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:53:22.877632   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:53:22.877682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:53:22.889956   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:53:22.901677   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:53:23.007362   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:53:23.168538   95151 docker.go:233] disabling docker service ...
	I1028 11:53:23.168615   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:53:23.181374   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:53:23.192932   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:53:23.310662   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:53:23.424314   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:53:23.437058   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:53:23.453309   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:53:23.453391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.462468   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:53:23.462530   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.471391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.480284   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.489458   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:53:23.498558   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.507348   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.522430   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.531223   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:53:23.539417   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:53:23.539455   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:53:23.551001   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:53:23.559053   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:23.661360   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
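The sysctl failure above is tolerated: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing, minikube falls back to loading br_netfilter, enables IPv4 forwarding, and only then reloads systemd and restarts CRI-O. A rough sketch of that ordering, assuming a hypothetical run() wrapper around local exec.Command rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, mirroring the
// "try, then fall back" ordering seen in the log above.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. Verify the bridge netfilter sysctl; a failure usually just means the module is not loaded yet.
	if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// 2. Load br_netfilter so the sysctl key appears.
		if _, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("could not load br_netfilter:", err)
		}
	}
	// 3. Enable IPv4 forwarding, then reload units and restart the runtime.
	run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	run("sudo", "systemctl", "daemon-reload")
	if out, err := run("sudo", "systemctl", "restart", "crio"); err != nil {
		fmt.Println("crio restart failed:", err, out)
	}
}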
	I1028 11:53:23.745420   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:53:23.745500   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:53:23.749645   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:53:23.749737   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:53:23.753175   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:53:23.787639   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:53:23.787732   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.812312   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.837983   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:53:23.839279   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:23.841862   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842156   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:23.842185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842344   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:53:23.845848   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
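The hosts-file edit above is idempotent: any existing host.minikube.internal line is filtered out before the fresh mapping is appended, so repeated starts never duplicate the entry. A small Go sketch of the same filter-then-append pattern; ensureHostsEntry and the scratch path are hypothetical, and the real flow writes the result back through sudo cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line mapping
// host to ip, regardless of how many times it is called.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any previous mapping for this hostname
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Seed a scratch copy; the real flow targets /etc/hosts on the guest.
	path := "/tmp/hosts.example"
	os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry(path, "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}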
	I1028 11:53:23.857277   95151 kubeadm.go:883] updating cluster {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:53:23.857375   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:23.857429   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:23.885745   95151 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:53:23.885803   95151 ssh_runner.go:195] Run: which lz4
	I1028 11:53:23.889147   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:53:23.889231   95151 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:53:23.892744   95151 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:53:23.892778   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:53:24.999101   95151 crio.go:462] duration metric: took 1.10988801s to copy over tarball
	I1028 11:53:24.999192   95151 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:53:26.940236   95151 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.941006419s)
	I1028 11:53:26.940272   95151 crio.go:469] duration metric: took 1.941134954s to extract the tarball
	I1028 11:53:26.940283   95151 ssh_runner.go:146] rm: /preloaded.tar.lz4
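The preload handling above copies the tarball only when the guest-side stat fails, extracts it under /var with lz4, and then removes it. A sketch of that sequence, assuming a hypothetical ensurePreload helper and local exec in place of the SSH session used in the real run:

package main

import (
	"fmt"
	"os/exec"
)

// ensurePreload mirrors the preload flow above: copy the tarball only when the
// guest-side stat fails, extract it under /var, then delete it. Paths are the
// ones from the log; the copy step is stubbed because scp rides the SSH session.
func ensurePreload(localTarball string) error {
	if err := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run(); err != nil {
		// In the real flow this is an scp over the established SSH connection.
		fmt.Println("preload missing on guest, would copy", localTarball)
	}
	if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run()
}

func main() {
	if err := ensurePreload("preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}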
	I1028 11:53:26.975750   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:27.015231   95151 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:53:27.015255   95151 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:53:27.015267   95151 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1028 11:53:27.015382   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:53:27.015466   95151 ssh_runner.go:195] Run: crio config
	I1028 11:53:27.056277   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:27.056302   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:27.056316   95151 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:53:27.056348   95151 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-273199 NodeName:ha-273199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:53:27.056497   95151 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-273199"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
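The kubeadm manifest above is rendered from the options struct logged at kubeadm.go:189. A pared-down sketch of that templating step; kubeadmParams and initConfigTmpl are illustrative stand-ins, not minikube's actual template or field names:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a small stand-in for the options shown above.
type kubeadmParams struct {
	NodeName          string
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		NodeName:          "ha-273199",
		AdvertiseAddress:  "192.168.39.208",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.2",
	}
	// Render the manifest to stdout; the real flow copies it to /var/tmp/minikube/kubeadm.yaml.new.
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}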
	
	I1028 11:53:27.056525   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:53:27.056581   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:53:27.072483   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:53:27.072593   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
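The lb_enable/lb_port settings in the manifest above are only added because the earlier modprobe of the ip_vs modules succeeded (kube-vip.go:167). A short sketch of that decision, assuming local exec in place of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kube-vip's control-plane load-balancing needs IPVS; mirror the modprobe probe
	// from the log before deciding whether to add lb_enable/lb_port to the manifest.
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	lbEnable := err == nil
	fmt.Printf("control-plane load-balancing enabled: %v\n", lbEnable)
	// The rendered static pod (shown above) is then written to
	// /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet picks it up.
}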
	I1028 11:53:27.072658   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:53:27.081034   95151 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:53:27.081092   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:53:27.089111   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:53:27.103592   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:53:27.118272   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:53:27.132197   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:53:27.146233   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:53:27.149485   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:53:27.160138   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:27.266620   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:53:27.282436   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.208
	I1028 11:53:27.282457   95151 certs.go:194] generating shared ca certs ...
	I1028 11:53:27.282478   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.282670   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:53:27.282728   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:53:27.282741   95151 certs.go:256] generating profile certs ...
	I1028 11:53:27.282809   95151 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:53:27.282826   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt with IP's: []
	I1028 11:53:27.352056   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt ...
	I1028 11:53:27.352083   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt: {Name:mk85ba9e2d7e36c2dc386074345191c8f41db2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352257   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key ...
	I1028 11:53:27.352268   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key: {Name:mk9e399a746995b3286d90f34445304b7c10dcc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352359   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602
	I1028 11:53:27.352376   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.254]
	I1028 11:53:27.701864   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 ...
	I1028 11:53:27.701927   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602: {Name:mkd8347f84237c1adf80fa2979e2851e438e06db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702124   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 ...
	I1028 11:53:27.702141   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602: {Name:mk8022b5d8b42b8f2926882e2d9f76f284ea38fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702238   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:53:27.702318   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:53:27.702367   95151 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:53:27.702384   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt with IP's: []
	I1028 11:53:27.887171   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt ...
	I1028 11:53:27.887202   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt: {Name:mk8df5a7b5c3f3d68e29bbf5b564443cc1d6c268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.887348   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key ...
	I1028 11:53:27.887359   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key: {Name:mk563997b82cf259c7f4075de274f929660222b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.887428   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:53:27.887444   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:53:27.887455   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:53:27.887469   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:53:27.887479   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:53:27.887493   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:53:27.887505   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:53:27.887517   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:53:27.887565   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:53:27.887608   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:53:27.887618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:53:27.887660   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:53:27.887680   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:53:27.887702   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:53:27.887740   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:27.887767   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:53:27.887780   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:27.887797   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:53:27.888376   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:53:27.912711   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:53:27.933465   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:53:27.954641   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:53:27.975959   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:53:27.996205   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:53:28.020327   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:53:28.061582   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:53:28.089945   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:53:28.110791   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:53:28.131009   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:53:28.150891   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:53:28.165153   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:53:28.170365   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:53:28.179779   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183529   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183568   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.188718   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:53:28.197725   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:53:28.206747   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210524   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210567   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.215456   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:53:28.224449   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:53:28.233481   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237734   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237779   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.242623   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
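The symlink commands above follow OpenSSL's CApath convention: each certificate is linked under its subject hash (from `openssl x509 -hash -noout`) plus a ".0" suffix so that hashed-directory lookups can find it. A sketch of the same linking step; linkBySubjectHash and the /tmp target directory are hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash recreates the "<hash>.0" symlink that OpenSSL's CApath
// lookup expects, using the same `openssl x509 -hash` call as the log above.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	// Paths are illustrative; the real run targets /usr/share/ca-certificates and /etc/ssl/certs.
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp")
	if err != nil {
		fmt.Println("linking failed:", err)
		return
	}
	fmt.Println("created", link)
}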
	I1028 11:53:28.251661   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:53:28.255167   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:53:28.255214   95151 kubeadm.go:392] StartCluster: {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:53:28.255281   95151 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:53:28.255311   95151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:53:28.288882   95151 cri.go:89] found id: ""
	I1028 11:53:28.288966   95151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:53:28.297523   95151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:53:28.306258   95151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:53:28.314625   95151 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:53:28.314641   95151 kubeadm.go:157] found existing configuration files:
	
	I1028 11:53:28.314676   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:53:28.322612   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:53:28.322668   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:53:28.330792   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:53:28.338690   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:53:28.338727   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:53:28.346773   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.354775   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:53:28.354815   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.362916   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:53:28.370667   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:53:28.370718   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:53:28.378723   95151 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:53:28.563600   95151 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:53:38.972007   95151 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:53:38.972072   95151 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:53:38.972185   95151 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:53:38.972293   95151 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:53:38.972416   95151 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:53:38.972521   95151 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:53:38.974416   95151 out.go:235]   - Generating certificates and keys ...
	I1028 11:53:38.974509   95151 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:53:38.974601   95151 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:53:38.974706   95151 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:53:38.974787   95151 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:53:38.974879   95151 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:53:38.974959   95151 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:53:38.975036   95151 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:53:38.975286   95151 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975365   95151 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:53:38.975516   95151 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975611   95151 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:53:38.975722   95151 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:53:38.975797   95151 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:53:38.975877   95151 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:53:38.975944   95151 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:53:38.976014   95151 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:53:38.976064   95151 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:53:38.976141   95151 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:53:38.976202   95151 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:53:38.976272   95151 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:53:38.976334   95151 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:53:38.977999   95151 out.go:235]   - Booting up control plane ...
	I1028 11:53:38.978106   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:53:38.978178   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:53:38.978240   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:53:38.978347   95151 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:53:38.978445   95151 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:53:38.978486   95151 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:53:38.978635   95151 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:53:38.978759   95151 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:53:38.978849   95151 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001498504s
	I1028 11:53:38.978951   95151 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:53:38.979035   95151 kubeadm.go:310] [api-check] The API server is healthy after 5.77087672s
	I1028 11:53:38.979160   95151 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:53:38.979301   95151 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:53:38.979391   95151 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:53:38.979587   95151 kubeadm.go:310] [mark-control-plane] Marking the node ha-273199 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:53:38.979669   95151 kubeadm.go:310] [bootstrap-token] Using token: 2y659k.kh228wx7fnaw6qlw
	I1028 11:53:38.980850   95151 out.go:235]   - Configuring RBAC rules ...
	I1028 11:53:38.980953   95151 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:53:38.981063   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:53:38.981194   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:53:38.981315   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:53:38.981461   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:53:38.981577   95151 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:53:38.981701   95151 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:53:38.981766   95151 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:53:38.981845   95151 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:53:38.981853   95151 kubeadm.go:310] 
	I1028 11:53:38.981937   95151 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:53:38.981950   95151 kubeadm.go:310] 
	I1028 11:53:38.982070   95151 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:53:38.982082   95151 kubeadm.go:310] 
	I1028 11:53:38.982120   95151 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:53:38.982205   95151 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:53:38.982281   95151 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:53:38.982294   95151 kubeadm.go:310] 
	I1028 11:53:38.982369   95151 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:53:38.982381   95151 kubeadm.go:310] 
	I1028 11:53:38.982451   95151 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:53:38.982463   95151 kubeadm.go:310] 
	I1028 11:53:38.982538   95151 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:53:38.982640   95151 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:53:38.982741   95151 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:53:38.982752   95151 kubeadm.go:310] 
	I1028 11:53:38.982827   95151 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:53:38.982895   95151 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:53:38.982901   95151 kubeadm.go:310] 
	I1028 11:53:38.982972   95151 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983065   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 11:53:38.983084   95151 kubeadm.go:310] 	--control-plane 
	I1028 11:53:38.983090   95151 kubeadm.go:310] 
	I1028 11:53:38.983184   95151 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:53:38.983205   95151 kubeadm.go:310] 
	I1028 11:53:38.983290   95151 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983394   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
	I1028 11:53:38.983404   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:38.983412   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:38.985768   95151 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:53:38.987136   95151 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:53:38.992611   95151 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:53:38.992633   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:53:39.010322   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
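For reference, the CNI apply step in the log above shells out to the pinned kubectl with an explicit kubeconfig. A minimal Go sketch of the same invocation via os/exec, using the paths that appear in the log (illustrative only, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary, kubeconfig and manifest paths are taken from the log line above.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}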
	I1028 11:53:39.369131   95151 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:53:39.369214   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199 minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=true
	I1028 11:53:39.369218   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:39.407348   95151 ops.go:34] apiserver oom_adj: -16
	I1028 11:53:39.512261   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.013130   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.512492   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.012760   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.512614   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.013105   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.513113   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.013197   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.130930   95151 kubeadm.go:1113] duration metric: took 3.761785969s to wait for elevateKubeSystemPrivileges
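The burst of `kubectl get sa default` calls above is minikube polling until the default ServiceAccount exists before it creates the minikube-rbac ClusterRoleBinding. A minimal sketch of that wait loop, assuming the same kubectl and kubeconfig paths shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exits cleanly once the "default" ServiceAccount is available.
		err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.31.2/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	fmt.Println("timed out waiting for the default service account")
}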
	I1028 11:53:43.130968   95151 kubeadm.go:394] duration metric: took 14.875757661s to StartCluster
	I1028 11:53:43.130992   95151 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.131082   95151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.131868   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.132066   95151 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.132080   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:53:43.132092   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:53:43.132110   95151 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:53:43.132191   95151 addons.go:69] Setting storage-provisioner=true in profile "ha-273199"
	I1028 11:53:43.132211   95151 addons.go:234] Setting addon storage-provisioner=true in "ha-273199"
	I1028 11:53:43.132226   95151 addons.go:69] Setting default-storageclass=true in profile "ha-273199"
	I1028 11:53:43.132243   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.132254   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.132263   95151 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-273199"
	I1028 11:53:43.132656   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132704   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.132733   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132778   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.148009   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I1028 11:53:43.148148   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I1028 11:53:43.148527   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.148654   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.149031   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149050   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149159   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149183   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149384   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149521   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149709   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.149923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.149968   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.152241   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.152594   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:53:43.153153   95151 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:53:43.153487   95151 addons.go:234] Setting addon default-storageclass=true in "ha-273199"
	I1028 11:53:43.153537   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.153923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.153966   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.165112   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I1028 11:53:43.165628   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.166122   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.166140   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.166447   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.166644   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.168390   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.168673   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1028 11:53:43.169162   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.169675   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.169697   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.170033   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.170484   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.170504   95151 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:53:43.170529   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.172043   95151 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.172062   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:53:43.172076   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.174879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175341   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.175404   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175532   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.175676   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.175782   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.175869   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.188178   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I1028 11:53:43.188778   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.189356   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.189374   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.189736   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.189945   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.191684   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.191903   95151 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.191914   95151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:53:43.191927   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.195100   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195553   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.195576   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195757   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.195929   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.196073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.196212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.240072   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:53:43.320825   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.357607   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.543521   95151 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
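The sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1 here). A minimal Go sketch of the same text transformation, operating on an assumed Corefile string rather than the live ConfigMap:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block ahead of the forward plugin,
// mirroring the effect of the sed expression in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	return strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}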
	I1028 11:53:43.793100   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793126   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793180   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793204   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793468   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793490   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793520   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793527   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793535   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793541   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793554   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793572   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793581   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793594   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793790   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793822   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793830   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793837   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793798   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793900   95151 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:53:43.793919   95151 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:53:43.794073   95151 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:53:43.794085   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.794095   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.794103   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.805561   95151 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:53:43.806144   95151 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:53:43.806158   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.806166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.806169   95151 round_trippers.go:473]     Content-Type: application/json
	I1028 11:53:43.806171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.809243   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:53:43.809609   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.809624   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.809925   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.809942   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.809968   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.812285   95151 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:53:43.813517   95151 addons.go:510] duration metric: took 681.412756ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:53:43.813552   95151 start.go:246] waiting for cluster config update ...
	I1028 11:53:43.813579   95151 start.go:255] writing updated cluster config ...
	I1028 11:53:43.815032   95151 out.go:201] 
	I1028 11:53:43.816430   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.816508   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.817974   95151 out.go:177] * Starting "ha-273199-m02" control-plane node in "ha-273199" cluster
	I1028 11:53:43.819185   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:43.819208   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:53:43.819300   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:53:43.819313   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:53:43.819381   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.819558   95151 start.go:360] acquireMachinesLock for ha-273199-m02: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:53:43.819623   95151 start.go:364] duration metric: took 33.288µs to acquireMachinesLock for "ha-273199-m02"
	I1028 11:53:43.819661   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.819740   95151 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:53:43.821273   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:53:43.821359   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.821393   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.836503   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1028 11:53:43.837015   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.837597   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.837620   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.837996   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.838155   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:53:43.838314   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:53:43.838482   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:53:43.838517   95151 client.go:168] LocalClient.Create starting
	I1028 11:53:43.838554   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:53:43.838592   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838613   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838664   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:53:43.838684   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838696   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838711   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:53:43.838718   95151 main.go:141] libmachine: (ha-273199-m02) Calling .PreCreateCheck
	I1028 11:53:43.838865   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:53:43.839217   95151 main.go:141] libmachine: Creating machine...
	I1028 11:53:43.839229   95151 main.go:141] libmachine: (ha-273199-m02) Calling .Create
	I1028 11:53:43.839340   95151 main.go:141] libmachine: (ha-273199-m02) Creating KVM machine...
	I1028 11:53:43.840585   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing default KVM network
	I1028 11:53:43.840677   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing private KVM network mk-ha-273199
	I1028 11:53:43.840819   95151 main.go:141] libmachine: (ha-273199-m02) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:43.840837   95151 main.go:141] libmachine: (ha-273199-m02) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:53:43.840944   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:43.840827   95521 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:43.841035   95151 main.go:141] libmachine: (ha-273199-m02) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:53:44.101967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.101844   95521 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa...
	I1028 11:53:44.215652   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215521   95521 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk...
	I1028 11:53:44.215686   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing magic tar header
	I1028 11:53:44.215700   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing SSH key tar header
	I1028 11:53:44.215717   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215655   95521 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:44.215805   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02
	I1028 11:53:44.215837   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:53:44.215846   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 (perms=drwx------)
	I1028 11:53:44.215856   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:53:44.215863   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:53:44.215873   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:53:44.215879   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:53:44.215889   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:53:44.215894   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:44.215903   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:44.215911   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:53:44.215919   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:53:44.215925   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:53:44.215930   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home
	I1028 11:53:44.215935   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Skipping /home - not owner
	I1028 11:53:44.216891   95151 main.go:141] libmachine: (ha-273199-m02) define libvirt domain using xml: 
	I1028 11:53:44.216918   95151 main.go:141] libmachine: (ha-273199-m02) <domain type='kvm'>
	I1028 11:53:44.216933   95151 main.go:141] libmachine: (ha-273199-m02)   <name>ha-273199-m02</name>
	I1028 11:53:44.216941   95151 main.go:141] libmachine: (ha-273199-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:53:44.216950   95151 main.go:141] libmachine: (ha-273199-m02)   <vcpu>2</vcpu>
	I1028 11:53:44.216957   95151 main.go:141] libmachine: (ha-273199-m02)   <features>
	I1028 11:53:44.216966   95151 main.go:141] libmachine: (ha-273199-m02)     <acpi/>
	I1028 11:53:44.216976   95151 main.go:141] libmachine: (ha-273199-m02)     <apic/>
	I1028 11:53:44.216983   95151 main.go:141] libmachine: (ha-273199-m02)     <pae/>
	I1028 11:53:44.216989   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.216999   95151 main.go:141] libmachine: (ha-273199-m02)   </features>
	I1028 11:53:44.217007   95151 main.go:141] libmachine: (ha-273199-m02)   <cpu mode='host-passthrough'>
	I1028 11:53:44.217034   95151 main.go:141] libmachine: (ha-273199-m02)   
	I1028 11:53:44.217056   95151 main.go:141] libmachine: (ha-273199-m02)   </cpu>
	I1028 11:53:44.217068   95151 main.go:141] libmachine: (ha-273199-m02)   <os>
	I1028 11:53:44.217079   95151 main.go:141] libmachine: (ha-273199-m02)     <type>hvm</type>
	I1028 11:53:44.217093   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='cdrom'/>
	I1028 11:53:44.217102   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='hd'/>
	I1028 11:53:44.217112   95151 main.go:141] libmachine: (ha-273199-m02)     <bootmenu enable='no'/>
	I1028 11:53:44.217123   95151 main.go:141] libmachine: (ha-273199-m02)   </os>
	I1028 11:53:44.217133   95151 main.go:141] libmachine: (ha-273199-m02)   <devices>
	I1028 11:53:44.217140   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='cdrom'>
	I1028 11:53:44.217157   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/boot2docker.iso'/>
	I1028 11:53:44.217172   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:53:44.217183   95151 main.go:141] libmachine: (ha-273199-m02)       <readonly/>
	I1028 11:53:44.217196   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217208   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='disk'>
	I1028 11:53:44.217219   95151 main.go:141] libmachine: (ha-273199-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:53:44.217231   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk'/>
	I1028 11:53:44.217243   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:53:44.217254   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217268   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217279   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='mk-ha-273199'/>
	I1028 11:53:44.217289   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217297   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217306   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217311   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='default'/>
	I1028 11:53:44.217318   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217327   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217340   95151 main.go:141] libmachine: (ha-273199-m02)     <serial type='pty'>
	I1028 11:53:44.217349   95151 main.go:141] libmachine: (ha-273199-m02)       <target port='0'/>
	I1028 11:53:44.217361   95151 main.go:141] libmachine: (ha-273199-m02)     </serial>
	I1028 11:53:44.217372   95151 main.go:141] libmachine: (ha-273199-m02)     <console type='pty'>
	I1028 11:53:44.217382   95151 main.go:141] libmachine: (ha-273199-m02)       <target type='serial' port='0'/>
	I1028 11:53:44.217390   95151 main.go:141] libmachine: (ha-273199-m02)     </console>
	I1028 11:53:44.217400   95151 main.go:141] libmachine: (ha-273199-m02)     <rng model='virtio'>
	I1028 11:53:44.217420   95151 main.go:141] libmachine: (ha-273199-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:53:44.217438   95151 main.go:141] libmachine: (ha-273199-m02)     </rng>
	I1028 11:53:44.217448   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217460   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217472   95151 main.go:141] libmachine: (ha-273199-m02)   </devices>
	I1028 11:53:44.217481   95151 main.go:141] libmachine: (ha-273199-m02) </domain>
	I1028 11:53:44.217489   95151 main.go:141] libmachine: (ha-273199-m02) 
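The XML printed line by line above is the libvirt domain that libmachine defines for the new VM. A heavily trimmed sketch of rendering a comparable definition with text/template; the struct and template here are illustrative, not minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// domainXML is a cut-down stand-in for the definition shown in the log.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:      "ha-273199-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-273199-m02.rawdisk", // placeholder, not the real store path
		Network:   "mk-ha-273199",
	}
	_ = tmpl.Execute(os.Stdout, cfg)
}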
	I1028 11:53:44.223932   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:5f:41:a3 in network default
	I1028 11:53:44.224544   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:44.224583   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring networks are active...
	I1028 11:53:44.225374   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network default is active
	I1028 11:53:44.225816   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network mk-ha-273199 is active
	I1028 11:53:44.226251   95151 main.go:141] libmachine: (ha-273199-m02) Getting domain xml...
	I1028 11:53:44.227023   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:45.439147   95151 main.go:141] libmachine: (ha-273199-m02) Waiting to get IP...
	I1028 11:53:45.440088   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.440554   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.440583   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.440482   95521 retry.go:31] will retry after 269.373557ms: waiting for machine to come up
	I1028 11:53:45.712000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.712443   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.712474   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.712389   95521 retry.go:31] will retry after 298.904949ms: waiting for machine to come up
	I1028 11:53:46.012797   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.013174   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.013203   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.013118   95521 retry.go:31] will retry after 446.110397ms: waiting for machine to come up
	I1028 11:53:46.460766   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.461220   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.461245   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.461168   95521 retry.go:31] will retry after 398.131323ms: waiting for machine to come up
	I1028 11:53:46.860852   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.861266   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.861297   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.861218   95521 retry.go:31] will retry after 575.124652ms: waiting for machine to come up
	I1028 11:53:47.437756   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:47.438185   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:47.438208   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:47.438138   95521 retry.go:31] will retry after 828.228762ms: waiting for machine to come up
	I1028 11:53:48.267451   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:48.267942   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:48.267968   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:48.267911   95521 retry.go:31] will retry after 1.143938031s: waiting for machine to come up
	I1028 11:53:49.414967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:49.415400   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:49.415424   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:49.415361   95521 retry.go:31] will retry after 1.300605887s: waiting for machine to come up
	I1028 11:53:50.717749   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:50.718139   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:50.718173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:50.718072   95521 retry.go:31] will retry after 1.594414229s: waiting for machine to come up
	I1028 11:53:52.314529   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:52.314977   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:52.315000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:52.314931   95521 retry.go:31] will retry after 1.837671448s: waiting for machine to come up
	I1028 11:53:54.154075   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:54.154455   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:54.154488   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:54.154386   95521 retry.go:31] will retry after 2.115441874s: waiting for machine to come up
	I1028 11:53:56.272674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:56.273183   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:56.273216   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:56.273084   95521 retry.go:31] will retry after 3.620483706s: waiting for machine to come up
	I1028 11:53:59.894801   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:59.895232   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:59.895260   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:59.895175   95521 retry.go:31] will retry after 3.99432381s: waiting for machine to come up
	I1028 11:54:03.891608   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892071   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has current primary IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892098   95151 main.go:141] libmachine: (ha-273199-m02) Found IP for machine: 192.168.39.225
	I1028 11:54:03.892108   95151 main.go:141] libmachine: (ha-273199-m02) Reserving static IP address...
	I1028 11:54:03.892498   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find host DHCP lease matching {name: "ha-273199-m02", mac: "52:54:00:ac:c5:96", ip: "192.168.39.225"} in network mk-ha-273199
	I1028 11:54:03.966695   95151 main.go:141] libmachine: (ha-273199-m02) Reserved static IP address: 192.168.39.225
	I1028 11:54:03.966737   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Getting to WaitForSSH function...
	I1028 11:54:03.966746   95151 main.go:141] libmachine: (ha-273199-m02) Waiting for SSH to be available...
	I1028 11:54:03.969754   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970154   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:03.970188   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970315   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH client type: external
	I1028 11:54:03.970338   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa (-rw-------)
	I1028 11:54:03.970367   95151 main.go:141] libmachine: (ha-273199-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:54:03.970390   95151 main.go:141] libmachine: (ha-273199-m02) DBG | About to run SSH command:
	I1028 11:54:03.970403   95151 main.go:141] libmachine: (ha-273199-m02) DBG | exit 0
	I1028 11:54:04.099273   95151 main.go:141] libmachine: (ha-273199-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 11:54:04.099507   95151 main.go:141] libmachine: (ha-273199-m02) KVM machine creation complete!
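The retry.go lines above show the driver polling with growing delays until the VM obtains a DHCP lease and answers on SSH. A minimal sketch of the same wait pattern, probing TCP port 22 directly with an increasing backoff (illustrative; the log shows minikube shelling out to ssh instead):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials host:22 with an increasing delay between attempts,
// roughly mirroring the retry cadence visible in the log.
func waitForSSH(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return fmt.Errorf("timed out waiting for SSH on %s", host)
}

func main() {
	if err := waitForSSH("192.168.39.225", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is reachable")
}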
	I1028 11:54:04.099831   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:04.100498   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100706   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100853   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:54:04.100870   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetState
	I1028 11:54:04.101944   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:54:04.101958   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:54:04.101966   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:54:04.101973   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.104164   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104483   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.104506   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104767   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.104942   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105094   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105250   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.105441   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.105654   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.105665   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:54:04.218542   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.218568   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:54:04.218578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.221233   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221723   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.221745   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221945   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.222117   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222361   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222486   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.222628   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.222833   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.222844   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:54:04.335872   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:54:04.335945   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:54:04.335957   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:54:04.335971   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336202   95151 buildroot.go:166] provisioning hostname "ha-273199-m02"
	I1028 11:54:04.336228   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336396   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.338798   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.339199   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339341   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.339521   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339681   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339813   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.339995   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.340196   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.340212   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m02 && echo "ha-273199-m02" | sudo tee /etc/hostname
	I1028 11:54:04.470703   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m02
	
	I1028 11:54:04.470739   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.473349   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473761   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.473785   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473981   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.474167   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474373   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474538   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.474717   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.474941   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.474960   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:54:04.595447   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.595480   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:54:04.595502   95151 buildroot.go:174] setting up certificates
	I1028 11:54:04.595513   95151 provision.go:84] configureAuth start
	I1028 11:54:04.595525   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.595847   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:04.598618   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599070   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.599093   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599227   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.601800   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602155   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.602179   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602325   95151 provision.go:143] copyHostCerts
	I1028 11:54:04.602362   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602399   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:54:04.602409   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602488   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:54:04.602621   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602649   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:54:04.602654   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602686   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:54:04.602741   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602762   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:54:04.602770   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602806   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:54:04.602864   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m02 san=[127.0.0.1 192.168.39.225 ha-273199-m02 localhost minikube]
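(Editorial note: the server certificate generated here carries exactly the SAN set shown in the log entry above: 127.0.0.1, 192.168.39.225, ha-273199-m02, localhost, minikube. A minimal, self-contained Go sketch of issuing such a certificate follows; it self-signs for brevity, whereas the logged flow signs with the ca.pem/ca-key.pem pair, and everything in the snippet other than the logged SAN and org values is illustrative.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-273199-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged for ha-273199-m02.
		DNSNames:    []string{"ha-273199-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.225")},
	}
	// Self-signed here for brevity; the real flow signs with the cluster CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}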
	I1028 11:54:04.712606   95151 provision.go:177] copyRemoteCerts
	I1028 11:54:04.712663   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:54:04.712689   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.715518   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.715885   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.715912   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.716119   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.716297   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.716427   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.716589   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:04.800760   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:54:04.800829   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:54:04.821891   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:54:04.821965   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:54:04.847580   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:54:04.847678   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:54:04.870711   95151 provision.go:87] duration metric: took 275.184548ms to configureAuth
	I1028 11:54:04.870736   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:54:04.870943   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:04.871041   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.873592   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.873927   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.873960   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.874110   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.874287   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874448   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874594   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.874763   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.874973   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.874993   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:54:05.089509   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:54:05.089537   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:54:05.089548   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetURL
	I1028 11:54:05.090747   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using libvirt version 6000000
	I1028 11:54:05.092647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.092983   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.093012   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.093142   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:54:05.093158   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:54:05.093166   95151 client.go:171] duration metric: took 21.254637002s to LocalClient.Create
	I1028 11:54:05.093189   95151 start.go:167] duration metric: took 21.254710604s to libmachine.API.Create "ha-273199"
	I1028 11:54:05.093198   95151 start.go:293] postStartSetup for "ha-273199-m02" (driver="kvm2")
	I1028 11:54:05.093210   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:54:05.093234   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.093471   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:54:05.093501   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.095736   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096090   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.096118   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096277   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.096451   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.096607   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.096752   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.185260   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:54:05.189209   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:54:05.189235   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:54:05.189307   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:54:05.189410   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:54:05.189427   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:54:05.189540   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:54:05.197852   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:05.218582   95151 start.go:296] duration metric: took 125.373729ms for postStartSetup
	I1028 11:54:05.218639   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:05.219202   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.221996   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222347   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.222371   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222675   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:05.222856   95151 start.go:128] duration metric: took 21.403106118s to createHost
	I1028 11:54:05.222880   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.225160   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225457   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.225486   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225646   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.225805   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.225943   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.226048   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.226180   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:05.226400   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:05.226415   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:54:05.335802   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116445.296198293
	
	I1028 11:54:05.335827   95151 fix.go:216] guest clock: 1730116445.296198293
	I1028 11:54:05.335841   95151 fix.go:229] Guest: 2024-10-28 11:54:05.296198293 +0000 UTC Remote: 2024-10-28 11:54:05.222866703 +0000 UTC m=+67.355138355 (delta=73.33159ms)
	I1028 11:54:05.335873   95151 fix.go:200] guest clock delta is within tolerance: 73.33159ms
	I1028 11:54:05.335881   95151 start.go:83] releasing machines lock for "ha-273199-m02", held for 21.516234573s
	I1028 11:54:05.335906   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.336186   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.338574   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.338916   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.338947   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.341021   95151 out.go:177] * Found network options:
	I1028 11:54:05.342553   95151 out.go:177]   - NO_PROXY=192.168.39.208
	W1028 11:54:05.343876   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.343912   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344410   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344601   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344686   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:54:05.344725   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	W1028 11:54:05.344795   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.344870   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:54:05.344892   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.347272   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347603   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.347674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347762   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.347920   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348040   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.348054   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348067   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.348192   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.348264   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.348426   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348717   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.584423   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:54:05.589736   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:54:05.589802   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:54:05.603598   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:54:05.603618   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:54:05.603689   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:54:05.618579   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:54:05.631876   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:54:05.631943   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:54:05.646115   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:54:05.659547   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:54:05.777548   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:54:05.920510   95151 docker.go:233] disabling docker service ...
	I1028 11:54:05.920601   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:54:05.935682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:54:05.948830   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:54:06.089969   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:54:06.214668   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:54:06.227025   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:54:06.243529   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:54:06.243600   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.252888   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:54:06.252945   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.262219   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.271415   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.282109   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:54:06.291692   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.300914   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.316681   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.325900   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:54:06.334164   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:54:06.334217   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:54:06.345295   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:54:06.353414   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:06.469387   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
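(Editorial note: the sed, modprobe, and sysctl calls above amount to a small set of edits to /etc/crio/crio.conf.d/02-crio.conf, the pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls, plus loading br_netfilter and enabling IP forwarding before restarting crio. The hedged Go sketch below replays the same command strings; it runs them locally via sh -c purely for illustration, whereas minikube executes them on the guest through its ssh_runner.)

package main

import (
	"fmt"
	"os/exec"
)

// configureCRIO applies the CRI-O tweaks seen in the log and restarts the service.
func configureCRIO() error {
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO(); err != nil {
		fmt.Println(err)
	}
}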
	I1028 11:54:06.564464   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:54:06.564532   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:54:06.570888   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:54:06.570947   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:54:06.574424   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:54:06.609470   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:54:06.609577   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.636484   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.662978   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:54:06.664616   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:54:06.665640   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:06.668607   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.668966   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:06.669004   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.669229   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:54:06.673421   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:54:06.684696   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:54:06.684909   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:06.685156   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.685193   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.700107   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38707
	I1028 11:54:06.700577   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.701057   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.701079   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.701393   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.701590   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:54:06.703274   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:06.703621   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.703693   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.718078   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I1028 11:54:06.718513   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.718987   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.719005   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.719322   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.719504   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:06.719671   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.225
	I1028 11:54:06.719683   95151 certs.go:194] generating shared ca certs ...
	I1028 11:54:06.719702   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.719827   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:54:06.719882   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:54:06.719896   95151 certs.go:256] generating profile certs ...
	I1028 11:54:06.720023   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:54:06.720055   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909
	I1028 11:54:06.720075   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.254]
	I1028 11:54:06.852806   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 ...
	I1028 11:54:06.852843   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909: {Name:mkb8ff493606403d4b0e4c7b0477c06720a08f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853016   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 ...
	I1028 11:54:06.853029   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909: {Name:mkb3a86efc0165669c50f21e172de132f2ce3594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853101   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:54:06.853233   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:54:06.853356   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:54:06.853375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:54:06.853388   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:54:06.853400   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:54:06.853413   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:54:06.853426   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:54:06.853437   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:54:06.853448   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:54:06.853457   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:54:06.853505   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:54:06.853533   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:54:06.853542   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:54:06.853570   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:54:06.853618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:54:06.853648   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:54:06.853686   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:06.853716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:06.853730   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:54:06.853740   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:54:06.853773   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:06.856848   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857257   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:06.857283   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857465   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:06.857654   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:06.857769   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:06.857872   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:06.935983   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:54:06.940830   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:54:06.951512   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:54:06.955415   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:54:06.964440   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:54:06.967840   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:54:06.977901   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:54:06.982116   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:54:06.992655   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:54:06.997042   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:54:07.006289   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:54:07.009936   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:54:07.019550   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:54:07.043269   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:54:07.066117   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:54:07.088035   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:54:07.109468   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:54:07.130767   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:54:07.153514   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:54:07.175748   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:54:07.198209   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:54:07.219569   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:54:07.241366   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:54:07.262724   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:54:07.277348   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:54:07.291720   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:54:07.305550   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:54:07.319528   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:54:07.333567   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:54:07.347382   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:54:07.361182   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:54:07.366165   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:54:07.375271   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379042   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379097   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.384098   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:54:07.393089   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:54:07.402170   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405931   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405973   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.410926   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:54:07.420134   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:54:07.429223   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433088   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433140   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.437953   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 11:54:07.447048   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:54:07.450389   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:54:07.450445   95151 kubeadm.go:934] updating node {m02 192.168.39.225 8443 v1.31.2 crio true true} ...
	I1028 11:54:07.450537   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:54:07.450564   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:54:07.450597   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:54:07.463741   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:54:07.463803   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:54:07.463849   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.472253   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:54:07.472293   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.480970   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:54:07.480983   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:54:07.481001   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.481025   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:54:07.481066   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.484605   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:54:07.484635   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:54:08.215699   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.215797   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.220472   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:54:08.220510   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:54:08.302949   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:08.332777   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.332899   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.344780   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:54:08.344827   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
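(Editorial note: the kubectl, kubeadm, and kubelet binaries are fetched from dl.k8s.io with a checksum= query pointing at the matching .sha256 file, verified, cached locally, and only then scp'd into /var/lib/minikube/binaries/v1.31.2 on the guest. A minimal Go sketch of that verify-then-install step is below, assuming a plain HTTP fetch rather than minikube's download package; the URL and version come from the log, the output path is illustrative.)

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}
	// Compare the downloaded bytes against the published SHA-256 digest.
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("kubelet checksum mismatch")
	}
	// In the logged flow the verified binary is then scp'd to the guest.
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
}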
	I1028 11:54:08.738465   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:54:08.748651   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 11:54:08.763967   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:54:08.778166   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:54:08.792673   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:54:08.796110   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:54:08.806415   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:08.913077   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:08.928428   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:08.928936   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:08.929001   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:08.945393   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1028 11:54:08.945922   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:08.946367   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:08.946393   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:08.946734   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:08.946931   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:08.947168   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:54:08.947340   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:54:08.947363   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:08.950295   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.950729   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:08.950759   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.951003   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:08.951292   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:08.951467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:08.951675   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:09.101707   95151 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:09.101780   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I1028 11:54:28.747369   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (19.645557844s)
	I1028 11:54:28.747419   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:54:29.256098   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m02 minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:54:29.382642   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:54:29.487190   95151 start.go:319] duration metric: took 20.540107471s to joinCluster
	I1028 11:54:29.487270   95151 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:29.487603   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:29.489950   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:54:29.491267   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:29.728819   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:29.746970   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:54:29.747328   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:54:29.747474   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:54:29.747814   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m02" to be "Ready" ...
	I1028 11:54:29.747948   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:29.747961   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:29.747972   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:29.747980   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:29.757406   95151 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:54:30.248317   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.248345   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.248355   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.248359   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.255105   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:54:30.748943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.748969   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.748978   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.748984   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.752101   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:31.248899   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.248919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.248928   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.248936   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.251583   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.748337   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.748357   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.748366   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.748371   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.751333   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.751989   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:32.248221   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.248243   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.248251   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.248255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.259191   95151 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:54:32.748148   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.748179   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.748189   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.748194   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.751101   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.249110   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.249135   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.249144   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.249150   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.251769   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.748905   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.748928   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.748937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.748942   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.751961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:33.752497   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:34.248826   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.248847   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.248857   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.248863   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.251279   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:34.748949   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.748976   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.748988   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.748993   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.752114   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:35.248874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.248898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.248906   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.248911   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.251839   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:35.748886   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.748919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.748932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.748940   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.751814   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.248781   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.248808   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.248821   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.248826   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.251662   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.252253   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:36.748294   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.748319   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.748329   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.748343   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.751795   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.248778   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.248807   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.248815   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.248820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.252064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.748876   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.748901   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.748910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.748922   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.752889   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.248910   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.248935   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.248946   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.248951   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.252324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.252974   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:38.748358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.748389   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.748401   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.748410   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.751564   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.248494   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.248515   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.248524   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.248530   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.251902   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.748889   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.748912   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.748920   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.748925   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.751666   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.248637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.248663   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.248675   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.248682   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.251500   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.748631   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.748655   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.748665   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.748671   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.751537   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.752161   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:41.248409   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.248429   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.248437   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.248441   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.251178   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:41.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.748632   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.748641   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.748645   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.751235   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.248135   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.248157   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.248166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.248171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.251061   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.748875   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.748898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.748904   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.748908   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.751883   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.752428   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:43.248728   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.248749   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.248757   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.248760   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.251847   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:43.748532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.748554   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.748562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.748565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.751916   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.248210   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.248233   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.248241   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.248245   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.251111   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:44.749062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.749085   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.749092   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.749096   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.752695   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.753451   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:45.248752   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.248776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.248784   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.248787   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.251702   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:45.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.748635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.748643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.748647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.751481   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:46.248237   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.248261   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.248269   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.248272   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.251677   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:46.748175   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.748196   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.748204   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.748209   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.750924   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.249094   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.249121   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.249133   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.249139   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.251939   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.252527   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:47.748867   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.748890   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.748899   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.748903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.751778   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.248555   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.248585   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.248593   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.248597   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.251510   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.252376   95151 node_ready.go:49] node "ha-273199-m02" has status "Ready":"True"
	I1028 11:54:48.252397   95151 node_ready.go:38] duration metric: took 18.504559305s for node "ha-273199-m02" to be "Ready" ...
	I1028 11:54:48.252406   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:54:48.252478   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:48.252487   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.252496   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.252506   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.256049   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:48.261653   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.261730   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:54:48.261739   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.261746   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.261749   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.264166   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.264759   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.264776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.264785   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.264790   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.266666   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.267238   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.267257   95151 pod_ready.go:82] duration metric: took 5.581341ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267267   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267326   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:54:48.267336   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.267346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.267353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.269749   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.270236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.270252   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.270259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.270262   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.272089   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.272472   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.272487   95151 pod_ready.go:82] duration metric: took 5.21491ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272495   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272536   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:54:48.272543   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.272550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.272553   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.274596   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.275004   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.275018   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.275024   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.275028   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.277124   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.277710   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.277730   95151 pod_ready.go:82] duration metric: took 5.229334ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277742   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277804   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:54:48.277816   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.277826   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.277830   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282085   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:48.282776   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.282794   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.282804   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282810   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.284715   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.285139   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.285156   95151 pod_ready.go:82] duration metric: took 7.407951ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.285172   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.449552   95151 request.go:632] Waited for 164.30368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449649   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.449658   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.449662   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.452644   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.649614   95151 request.go:632] Waited for 196.347979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649674   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649678   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.649686   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.649691   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.652639   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.653086   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.653104   95151 pod_ready.go:82] duration metric: took 367.924183ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.653115   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.849567   95151 request.go:632] Waited for 196.382043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849633   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849638   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.849645   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.849650   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.853050   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.049149   95151 request.go:632] Waited for 195.394568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049239   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049247   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.049258   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.049265   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.052619   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.053476   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.053498   95151 pod_ready.go:82] duration metric: took 400.377088ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.053510   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.249514   95151 request.go:632] Waited for 195.91409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249580   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.249588   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.249592   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.252347   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.449321   95151 request.go:632] Waited for 196.389294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449390   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449397   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.449406   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.449409   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.451910   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.452527   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.452552   95151 pod_ready.go:82] duration metric: took 399.03422ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.452565   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.649568   95151 request.go:632] Waited for 196.917152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.649643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.649647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.652785   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.848836   95151 request.go:632] Waited for 195.315288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848913   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848921   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.848932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.848937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.851674   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.852191   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.852210   95151 pod_ready.go:82] duration metric: took 399.639073ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.852221   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.049350   95151 request.go:632] Waited for 197.035616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049425   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049433   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.049443   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.049452   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.052771   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.248743   95151 request.go:632] Waited for 195.280445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248807   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248812   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.248827   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.248832   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.251804   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:50.252387   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.252412   95151 pod_ready.go:82] duration metric: took 400.185555ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.252424   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.449549   95151 request.go:632] Waited for 197.016421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449623   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449628   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.449639   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.449643   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.453027   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.649191   95151 request.go:632] Waited for 195.415709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649276   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649281   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.649289   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.649293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.652536   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.653266   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.653285   95151 pod_ready.go:82] duration metric: took 400.855966ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.653296   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.849376   95151 request.go:632] Waited for 196.004526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849458   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849463   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.849471   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.849475   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.852508   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.049649   95151 request.go:632] Waited for 196.358583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049709   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049715   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.049722   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.049726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.053157   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.053815   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.053835   95151 pod_ready.go:82] duration metric: took 400.533283ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.053846   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.248991   95151 request.go:632] Waited for 195.052058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249059   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249064   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.249072   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.249078   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.252735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.448724   95151 request.go:632] Waited for 195.285595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448790   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448806   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.448820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.448825   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.452721   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.453238   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.453263   95151 pod_ready.go:82] duration metric: took 399.409754ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.453278   95151 pod_ready.go:39] duration metric: took 3.200858022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:54:51.453306   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:54:51.453378   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:54:51.468618   95151 api_server.go:72] duration metric: took 21.98130215s to wait for apiserver process to appear ...
	I1028 11:54:51.468648   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:54:51.468673   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:54:51.472937   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:54:51.473008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:54:51.473014   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.473022   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.473030   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.473790   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:54:51.473893   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:54:51.473910   95151 api_server.go:131] duration metric: took 5.255617ms to wait for apiserver health ...
	I1028 11:54:51.473917   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:54:51.649350   95151 request.go:632] Waited for 175.3296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649418   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649424   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.649431   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.649436   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.653819   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:51.658610   95151 system_pods.go:59] 17 kube-system pods found
	I1028 11:54:51.658641   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:51.658646   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:51.658651   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:51.658654   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:51.658657   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:51.658660   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:51.658664   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:51.658669   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:51.658674   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:51.658682   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:51.658691   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:51.658696   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:51.658700   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:51.658704   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:51.658707   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:51.658710   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:51.658715   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:51.658722   95151 system_pods.go:74] duration metric: took 184.79709ms to wait for pod list to return data ...
	I1028 11:54:51.658732   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:54:51.849471   95151 request.go:632] Waited for 190.648261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849537   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.849546   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.849549   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.853472   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.853716   95151 default_sa.go:45] found service account: "default"
	I1028 11:54:51.853732   95151 default_sa.go:55] duration metric: took 194.991571ms for default service account to be created ...
	I1028 11:54:51.853742   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:54:52.049206   95151 request.go:632] Waited for 195.38768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049272   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049279   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.049287   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.049293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.055256   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:54:52.060109   95151 system_pods.go:86] 17 kube-system pods found
	I1028 11:54:52.060133   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:52.060139   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:52.060143   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:52.060147   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:52.060151   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:52.060154   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:52.060158   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:52.060162   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:52.060166   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:52.060171   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:52.060175   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:52.060178   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:52.060182   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:52.060185   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:52.060188   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:52.060192   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:52.060196   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:52.060203   95151 system_pods.go:126] duration metric: took 206.45399ms to wait for k8s-apps to be running ...
	I1028 11:54:52.060213   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:54:52.060255   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:52.076447   95151 system_svc.go:56] duration metric: took 16.226067ms WaitForService to wait for kubelet
	I1028 11:54:52.076476   95151 kubeadm.go:582] duration metric: took 22.589167548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:54:52.076506   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:54:52.248935   95151 request.go:632] Waited for 172.334931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.248998   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.249004   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.249011   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.249015   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.252625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:52.253475   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253500   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253515   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253518   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253523   95151 node_conditions.go:105] duration metric: took 177.008634ms to run NodePressure ...
	I1028 11:54:52.253537   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:54:52.253563   95151 start.go:255] writing updated cluster config ...
	I1028 11:54:52.255885   95151 out.go:201] 
	I1028 11:54:52.257299   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:52.257397   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.258847   95151 out.go:177] * Starting "ha-273199-m03" control-plane node in "ha-273199" cluster
	I1028 11:54:52.259962   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:54:52.259986   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:54:52.260095   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:54:52.260118   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:54:52.260241   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.260461   95151 start.go:360] acquireMachinesLock for ha-273199-m03: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:54:52.260509   95151 start.go:364] duration metric: took 28.17µs to acquireMachinesLock for "ha-273199-m03"
	I1028 11:54:52.260527   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:52.260626   95151 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:54:52.262400   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:54:52.262503   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:52.262543   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:52.277859   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I1028 11:54:52.278262   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:52.278738   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:52.278759   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:52.279160   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:52.279351   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:54:52.279503   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:54:52.279669   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:54:52.279701   95151 client.go:168] LocalClient.Create starting
	I1028 11:54:52.279735   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:54:52.279771   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279787   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279863   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:54:52.279888   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279905   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279929   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:54:52.279940   95151 main.go:141] libmachine: (ha-273199-m03) Calling .PreCreateCheck
	I1028 11:54:52.280085   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:54:52.280426   95151 main.go:141] libmachine: Creating machine...
	I1028 11:54:52.280439   95151 main.go:141] libmachine: (ha-273199-m03) Calling .Create
	I1028 11:54:52.280557   95151 main.go:141] libmachine: (ha-273199-m03) Creating KVM machine...
	I1028 11:54:52.281865   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing default KVM network
	I1028 11:54:52.281971   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing private KVM network mk-ha-273199
	I1028 11:54:52.282111   95151 main.go:141] libmachine: (ha-273199-m03) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.282133   95151 main.go:141] libmachine: (ha-273199-m03) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:54:52.282187   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.282077   95896 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.282257   95151 main.go:141] libmachine: (ha-273199-m03) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:54:52.559668   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.559518   95896 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa...
	I1028 11:54:52.735541   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.735336   95896 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk...
	I1028 11:54:52.735589   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing magic tar header
	I1028 11:54:52.735964   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing SSH key tar header
	I1028 11:54:52.736074   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.736016   95896 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.736145   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03
	I1028 11:54:52.736240   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 (perms=drwx------)
	I1028 11:54:52.736277   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:54:52.736290   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:54:52.736342   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:54:52.736362   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:54:52.736375   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.736394   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:54:52.736406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:54:52.736415   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:54:52.736428   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:54:52.736436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home
	I1028 11:54:52.736447   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:54:52.736462   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:52.736473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Skipping /home - not owner
	I1028 11:54:52.737378   95151 main.go:141] libmachine: (ha-273199-m03) define libvirt domain using xml: 
	I1028 11:54:52.737401   95151 main.go:141] libmachine: (ha-273199-m03) <domain type='kvm'>
	I1028 11:54:52.737412   95151 main.go:141] libmachine: (ha-273199-m03)   <name>ha-273199-m03</name>
	I1028 11:54:52.737420   95151 main.go:141] libmachine: (ha-273199-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:54:52.737428   95151 main.go:141] libmachine: (ha-273199-m03)   <vcpu>2</vcpu>
	I1028 11:54:52.737434   95151 main.go:141] libmachine: (ha-273199-m03)   <features>
	I1028 11:54:52.737442   95151 main.go:141] libmachine: (ha-273199-m03)     <acpi/>
	I1028 11:54:52.737451   95151 main.go:141] libmachine: (ha-273199-m03)     <apic/>
	I1028 11:54:52.737465   95151 main.go:141] libmachine: (ha-273199-m03)     <pae/>
	I1028 11:54:52.737475   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737485   95151 main.go:141] libmachine: (ha-273199-m03)   </features>
	I1028 11:54:52.737498   95151 main.go:141] libmachine: (ha-273199-m03)   <cpu mode='host-passthrough'>
	I1028 11:54:52.737507   95151 main.go:141] libmachine: (ha-273199-m03)   
	I1028 11:54:52.737512   95151 main.go:141] libmachine: (ha-273199-m03)   </cpu>
	I1028 11:54:52.737516   95151 main.go:141] libmachine: (ha-273199-m03)   <os>
	I1028 11:54:52.737521   95151 main.go:141] libmachine: (ha-273199-m03)     <type>hvm</type>
	I1028 11:54:52.737530   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='cdrom'/>
	I1028 11:54:52.737537   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='hd'/>
	I1028 11:54:52.737549   95151 main.go:141] libmachine: (ha-273199-m03)     <bootmenu enable='no'/>
	I1028 11:54:52.737555   95151 main.go:141] libmachine: (ha-273199-m03)   </os>
	I1028 11:54:52.737566   95151 main.go:141] libmachine: (ha-273199-m03)   <devices>
	I1028 11:54:52.737573   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='cdrom'>
	I1028 11:54:52.737605   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/boot2docker.iso'/>
	I1028 11:54:52.737626   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:54:52.737633   95151 main.go:141] libmachine: (ha-273199-m03)       <readonly/>
	I1028 11:54:52.737643   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737649   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='disk'>
	I1028 11:54:52.737657   95151 main.go:141] libmachine: (ha-273199-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:54:52.737664   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk'/>
	I1028 11:54:52.737674   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:54:52.737679   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737686   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737691   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='mk-ha-273199'/>
	I1028 11:54:52.737697   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737702   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737709   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737714   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='default'/>
	I1028 11:54:52.737721   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737725   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737736   95151 main.go:141] libmachine: (ha-273199-m03)     <serial type='pty'>
	I1028 11:54:52.737741   95151 main.go:141] libmachine: (ha-273199-m03)       <target port='0'/>
	I1028 11:54:52.737750   95151 main.go:141] libmachine: (ha-273199-m03)     </serial>
	I1028 11:54:52.737755   95151 main.go:141] libmachine: (ha-273199-m03)     <console type='pty'>
	I1028 11:54:52.737764   95151 main.go:141] libmachine: (ha-273199-m03)       <target type='serial' port='0'/>
	I1028 11:54:52.737796   95151 main.go:141] libmachine: (ha-273199-m03)     </console>
	I1028 11:54:52.737822   95151 main.go:141] libmachine: (ha-273199-m03)     <rng model='virtio'>
	I1028 11:54:52.737835   95151 main.go:141] libmachine: (ha-273199-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:54:52.737849   95151 main.go:141] libmachine: (ha-273199-m03)     </rng>
	I1028 11:54:52.737862   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737871   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737883   95151 main.go:141] libmachine: (ha-273199-m03)   </devices>
	I1028 11:54:52.737895   95151 main.go:141] libmachine: (ha-273199-m03) </domain>
	I1028 11:54:52.737906   95151 main.go:141] libmachine: (ha-273199-m03) 
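For reference, the XML printed above is what the kvm2 driver hands to libvirt. The following is an illustrative sketch only (not part of the recorded test output, and not minikube's actual driver code) of defining and starting such a domain with the libvirt Go bindings; the import path and file name are assumptions.

// Sketch: define a persistent libvirt domain from an XML file and start it,
// roughly equivalent to "virsh define domain.xml && virsh start <name>".
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the Go bindings
)

func main() {
	// Same system URI the log shows (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold a definition like the one printed above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	dom, err := conn.DomainDefineXML(string(xml)) // define the persistent domain
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}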
	I1028 11:54:52.744674   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:8b:32:6e in network default
	I1028 11:54:52.745255   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring networks are active...
	I1028 11:54:52.745282   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:52.745947   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network default is active
	I1028 11:54:52.746212   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network mk-ha-273199 is active
	I1028 11:54:52.746662   95151 main.go:141] libmachine: (ha-273199-m03) Getting domain xml...
	I1028 11:54:52.747399   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:53.955503   95151 main.go:141] libmachine: (ha-273199-m03) Waiting to get IP...
	I1028 11:54:53.956506   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:53.956900   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:53.956929   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:53.956873   95896 retry.go:31] will retry after 206.527377ms: waiting for machine to come up
	I1028 11:54:54.165229   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.165718   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.165747   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.165667   95896 retry.go:31] will retry after 298.714532ms: waiting for machine to come up
	I1028 11:54:54.466211   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.466648   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.466677   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.466592   95896 retry.go:31] will retry after 313.294403ms: waiting for machine to come up
	I1028 11:54:54.781194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.781751   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.781781   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.781697   95896 retry.go:31] will retry after 490.276773ms: waiting for machine to come up
	I1028 11:54:55.273485   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:55.273980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:55.274010   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:55.273908   95896 retry.go:31] will retry after 747.967363ms: waiting for machine to come up
	I1028 11:54:56.023947   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.024406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.024436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.024354   95896 retry.go:31] will retry after 879.955575ms: waiting for machine to come up
	I1028 11:54:56.905338   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.905786   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.905854   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.905727   95896 retry.go:31] will retry after 900.403526ms: waiting for machine to come up
	I1028 11:54:57.807987   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:57.808508   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:57.808532   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:57.808456   95896 retry.go:31] will retry after 915.528727ms: waiting for machine to come up
	I1028 11:54:58.725704   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:58.726141   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:58.726171   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:58.726079   95896 retry.go:31] will retry after 1.589094397s: waiting for machine to come up
	I1028 11:55:00.316739   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:00.317159   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:00.317192   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:00.317103   95896 retry.go:31] will retry after 2.113867198s: waiting for machine to come up
	I1028 11:55:02.432898   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:02.433399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:02.433425   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:02.433344   95896 retry.go:31] will retry after 2.28050393s: waiting for machine to come up
	I1028 11:55:04.716742   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:04.717181   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:04.717204   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:04.717143   95896 retry.go:31] will retry after 2.249398536s: waiting for machine to come up
	I1028 11:55:06.969577   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:06.970058   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:06.970080   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:06.970033   95896 retry.go:31] will retry after 2.958136846s: waiting for machine to come up
	I1028 11:55:09.929637   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:09.930041   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:09.930070   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:09.929982   95896 retry.go:31] will retry after 4.070894756s: waiting for machine to come up
	I1028 11:55:14.002837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003301   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has current primary IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003323   95151 main.go:141] libmachine: (ha-273199-m03) Found IP for machine: 192.168.39.14
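The "will retry after ..." lines above are a poll-with-growing-delay loop waiting for the new VM to obtain a DHCP lease. Below is a minimal standalone sketch of that pattern in Go; it is not minikube's retry.go, and all names and delay values are illustrative.

// Sketch: poll a condition with an increasing, jittered delay until it holds
// or a deadline passes, mirroring the retry progression seen in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() until it returns true or the timeout elapses.
func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// Grow the delay and add jitter, roughly like the log's
		// 206ms, 298ms, 313ms, ... 4.07s progression.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: will retry after %v\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	start := time.Now()
	// Stand-in condition: pretend the "machine" gets an IP after ~3s.
	err := waitFor(func() (bool, error) {
		return time.Since(start) > 3*time.Second, nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}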
	I1028 11:55:14.003336   95151 main.go:141] libmachine: (ha-273199-m03) Reserving static IP address...
	I1028 11:55:14.003697   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "ha-273199-m03", mac: "52:54:00:46:1d:e9", ip: "192.168.39.14"} in network mk-ha-273199
	I1028 11:55:14.078161   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:14.078198   95151 main.go:141] libmachine: (ha-273199-m03) Reserved static IP address: 192.168.39.14
	I1028 11:55:14.078221   95151 main.go:141] libmachine: (ha-273199-m03) Waiting for SSH to be available...
	I1028 11:55:14.080426   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.080837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199
	I1028 11:55:14.080864   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find defined IP address of network mk-ha-273199 interface with MAC address 52:54:00:46:1d:e9
	I1028 11:55:14.080998   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:14.081020   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:14.081088   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:14.081126   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:14.081172   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:14.084960   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 11:55:14.084981   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 11:55:14.084988   95151 main.go:141] libmachine: (ha-273199-m03) DBG | command : exit 0
	I1028 11:55:14.084993   95151 main.go:141] libmachine: (ha-273199-m03) DBG | err     : exit status 255
	I1028 11:55:14.084999   95151 main.go:141] libmachine: (ha-273199-m03) DBG | output  : 
	I1028 11:55:17.085220   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:17.087584   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.087980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.088014   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.088124   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:17.088151   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:17.088186   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:17.088203   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:17.088242   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:17.219250   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 11:55:17.219518   95151 main.go:141] libmachine: (ha-273199-m03) KVM machine creation complete!
	I1028 11:55:17.219876   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:17.220483   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220685   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220845   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:55:17.220861   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetState
	I1028 11:55:17.222309   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:55:17.222328   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:55:17.222335   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:55:17.222343   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.224588   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.224925   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.224952   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.225089   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.225238   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225410   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225535   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.225685   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.225933   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.225948   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:55:17.334782   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
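Above, provisioning probes the new VM by running "exit 0" over SSH with the generated key. The sketch below shows that kind of one-shot remote command with golang.org/x/crypto/ssh; it is not libmachine's own client, and the address and key path are placeholders, not values from this run.

// Sketch: run a single command (such as "exit 0") on a remote host over SSH
// using key-based auth.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}

	client, err := ssh.Dial("tcp", "192.0.2.10:22", cfg) // placeholder address
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("exit 0")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}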
	I1028 11:55:17.334812   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:55:17.334821   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.337833   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338269   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.338297   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338479   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.338845   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339007   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339176   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.339341   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.339539   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.339557   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:55:17.451978   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:55:17.452046   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:55:17.452059   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:55:17.452070   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452277   95151 buildroot.go:166] provisioning hostname "ha-273199-m03"
	I1028 11:55:17.452288   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452476   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.455103   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455535   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.455562   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455708   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.455867   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.455984   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.456067   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.456198   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.456408   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.456424   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m03 && echo "ha-273199-m03" | sudo tee /etc/hostname
	I1028 11:55:17.580666   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m03
	
	I1028 11:55:17.580700   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.583194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583511   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.583528   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583802   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.584016   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584194   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584336   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.584491   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.584694   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.584718   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:55:17.704448   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:55:17.704483   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:55:17.704502   95151 buildroot.go:174] setting up certificates
	I1028 11:55:17.704515   95151 provision.go:84] configureAuth start
	I1028 11:55:17.704525   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.704814   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:17.707324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707661   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.707690   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707847   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.710530   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710812   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.710834   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710987   95151 provision.go:143] copyHostCerts
	I1028 11:55:17.711016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711055   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:55:17.711067   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711144   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:55:17.711240   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711266   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:55:17.711274   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711309   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:55:17.711375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711397   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:55:17.711406   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711441   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:55:17.711512   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m03 san=[127.0.0.1 192.168.39.14 ha-273199-m03 localhost minikube]
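The provision step above generates a server certificate whose SANs cover the node's IP, hostname, localhost and minikube. Below is a simplified sketch of producing such a certificate with only the Go standard library; unlike minikube it is self-signed rather than signed by the profile CA, and the output file names are placeholders.

// Sketch: self-signed server certificate with DNS and IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatalf("key: %v", err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-273199-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-273199-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.14")},
	}

	// Self-signed: the template is also the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatalf("cert: %v", err)
	}

	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})

	if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote server.pem and server-key.pem")
}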
	I1028 11:55:17.872732   95151 provision.go:177] copyRemoteCerts
	I1028 11:55:17.872791   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:55:17.872822   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.875766   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876231   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.876275   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876474   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.876674   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.876862   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.877007   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:17.961016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:55:17.961081   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:55:17.984138   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:55:17.984226   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:55:18.008131   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:55:18.008227   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:55:18.031369   95151 provision.go:87] duration metric: took 326.838997ms to configureAuth
	I1028 11:55:18.031405   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:55:18.031687   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:18.031768   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.034245   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034499   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.034512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034834   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.035030   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035212   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035366   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.035511   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.035733   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.035755   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:55:18.272929   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:55:18.272957   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:55:18.272965   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetURL
	I1028 11:55:18.274324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using libvirt version 6000000
	I1028 11:55:18.276917   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277260   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.277286   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277469   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:55:18.277495   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:55:18.277503   95151 client.go:171] duration metric: took 25.997791015s to LocalClient.Create
	I1028 11:55:18.277533   95151 start.go:167] duration metric: took 25.997864783s to libmachine.API.Create "ha-273199"
	I1028 11:55:18.277545   95151 start.go:293] postStartSetup for "ha-273199-m03" (driver="kvm2")
	I1028 11:55:18.277554   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:55:18.277570   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.277772   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:55:18.277797   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.280107   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.280500   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280672   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.280818   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.280972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.281096   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.364949   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:55:18.368679   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:55:18.368702   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:55:18.368765   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:55:18.368831   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:55:18.368841   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:55:18.368936   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:55:18.377576   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:18.398595   95151 start.go:296] duration metric: took 121.036125ms for postStartSetup
	I1028 11:55:18.398663   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:18.399226   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.401512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.401817   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.401845   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.402086   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:55:18.402271   95151 start.go:128] duration metric: took 26.1416351s to createHost
	I1028 11:55:18.402293   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.404399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404785   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.404814   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.405120   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405233   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405349   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.405479   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.405697   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.405707   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:55:18.516101   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116518.496273878
	
	I1028 11:55:18.516127   95151 fix.go:216] guest clock: 1730116518.496273878
	I1028 11:55:18.516135   95151 fix.go:229] Guest: 2024-10-28 11:55:18.496273878 +0000 UTC Remote: 2024-10-28 11:55:18.402282303 +0000 UTC m=+140.534554028 (delta=93.991575ms)
	I1028 11:55:18.516153   95151 fix.go:200] guest clock delta is within tolerance: 93.991575ms
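The guest-clock lines above compare the VM's "date +%s.%N" output against the host clock and accept the ~94ms drift as within tolerance. A small sketch of that comparison, using the values recorded in the log; the tolerance constant is illustrative, not minikube's actual threshold.

// Sketch: parse a "seconds.nanoseconds" timestamp and compute clock skew.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano converts "1730116518.496273878" into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fraction to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixNano("1730116518.496273878") // guest value from the log
	if err != nil {
		panic(err)
	}
	local := time.Date(2024, 10, 28, 11, 55, 18, 402282303, time.UTC) // "Remote" value from the log
	delta := guest.Sub(local)
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(float64(delta)) < float64(tolerance))
}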
	I1028 11:55:18.516160   95151 start.go:83] releasing machines lock for "ha-273199-m03", held for 26.255640766s
	I1028 11:55:18.516185   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.516440   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.519412   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.519815   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.519848   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.524337   95151 out.go:177] * Found network options:
	I1028 11:55:18.525743   95151 out.go:177]   - NO_PROXY=192.168.39.208,192.168.39.225
	W1028 11:55:18.527126   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.527158   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.527179   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527726   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527918   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.528047   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:55:18.528091   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	W1028 11:55:18.528116   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.528141   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.528213   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:55:18.528236   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.531068   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531433   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531460   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531507   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531598   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.531771   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.531976   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531993   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532001   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.532119   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.532160   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.532259   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.532384   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532522   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.778405   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:55:18.783655   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:55:18.783756   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:55:18.797677   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:55:18.797700   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:55:18.797761   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:55:18.814061   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:55:18.825773   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:55:18.825825   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:55:18.837935   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:55:18.849554   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:55:18.965481   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:55:19.099249   95151 docker.go:233] disabling docker service ...
	I1028 11:55:19.099323   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:55:19.113114   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:55:19.124849   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:55:19.250769   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:55:19.359879   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:55:19.373349   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:55:19.389521   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:55:19.389615   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.398854   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:55:19.398906   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.407802   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.417192   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.427164   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:55:19.436640   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.445835   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.462270   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
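The sed invocations above pin CRI-O's pause image, switch it to the cgroupfs cgroup manager, and open unprivileged ports via default_sysctls, all by editing /etc/crio/crio.conf.d/02-crio.conf in place. Purely as an illustrative sketch (not minikube's actual code), the same style of edit could be driven from Go like this; runCmd and configureCRIO are hypothetical helpers standing in for the ssh_runner calls in the log:

```go
// sketch of the sed-based CRI-O config edits shown in the log (illustrative only)
package sketch

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical stand-in for minikube's ssh_runner.
func runCmd(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func configureCRIO(pauseImage, cgroupManager string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		// Pin the pause image used for pod sandboxes.
		fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage),
		// Match the kubelet's cgroup driver (cgroupfs in this run).
		fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager),
	}
	for _, e := range edits {
		if err := runCmd("sudo", "sed", "-i", e, conf); err != nil {
			return err
		}
	}
	return nil
}
```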
	I1028 11:55:19.471609   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:55:19.480345   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:55:19.480383   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:55:19.492803   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:55:19.501227   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:19.617782   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:55:19.703544   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:55:19.703660   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
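After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock to appear before moving on. A minimal sketch of that kind of poll, assuming a local stat-based check rather than minikube's ssh_runner:

```go
// sketch: wait for a socket path to show up, as in "Will wait 60s for socket path"
package sketch

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists; the runtime is up far enough to talk to
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}
```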
	I1028 11:55:19.708269   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:55:19.708326   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:55:19.712086   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:55:19.749930   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:55:19.750010   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.775811   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.801952   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:55:19.803114   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:55:19.804273   95151 out.go:177]   - env NO_PROXY=192.168.39.208,192.168.39.225
	I1028 11:55:19.805417   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:19.808218   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808625   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:19.808655   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808919   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:55:19.812627   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:55:19.824073   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:55:19.824319   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:19.824582   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.824620   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.838910   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I1028 11:55:19.839306   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.839763   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.839782   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.840142   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.840307   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:55:19.841569   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:19.841856   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.841897   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.855881   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I1028 11:55:19.856375   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.856826   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.856843   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.857163   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.857327   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:19.857467   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.14
	I1028 11:55:19.857480   95151 certs.go:194] generating shared ca certs ...
	I1028 11:55:19.857496   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.857646   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:55:19.857702   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:55:19.857720   95151 certs.go:256] generating profile certs ...
	I1028 11:55:19.857827   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:55:19.857863   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7
	I1028 11:55:19.857891   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.14 192.168.39.254]
	I1028 11:55:19.946624   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 ...
	I1028 11:55:19.946653   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7: {Name:mk3236f0712e0310e6a0f8a3941b2eeadd0570c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946816   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 ...
	I1028 11:55:19.946829   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7: {Name:mka0c613afe4278aca8a4ff26ddba521c4e341b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946908   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:55:19.947042   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
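The profile certificate generated here carries IP SANs for the service IPs, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254, so the apiserver is valid whichever address a client dials. A hedged sketch of producing such a certificate with Go's crypto/x509 (CA loading is omitted; signerCert/signerKey are placeholders, not minikube's code):

```go
// sketch: apiserver serving cert with the IP SANs listed in the log above
package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func newAPIServerCert(signerCert *x509.Certificate, signerKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Service IPs, the three node IPs, and the HA VIP from the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.208"), net.ParseIP("192.168.39.225"),
			net.ParseIP("192.168.39.14"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, signerCert, key.Public(), signerKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
```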
	I1028 11:55:19.947166   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:55:19.947182   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:55:19.947196   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:55:19.947208   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:55:19.947221   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:55:19.947233   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:55:19.947245   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:55:19.947256   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:55:19.967716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:55:19.967802   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:55:19.967847   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:55:19.967864   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:55:19.967899   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:55:19.967933   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:55:19.967965   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:55:19.968019   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:19.968051   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:55:19.968066   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:55:19.968076   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:19.968113   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:19.971063   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971502   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:19.971527   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971715   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:19.971902   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:19.972073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:19.972212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:20.047980   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:55:20.052462   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:55:20.063257   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:55:20.067603   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:55:20.083360   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:55:20.087209   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:55:20.096958   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:55:20.100595   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:55:20.113829   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:55:20.117648   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:55:20.126859   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:55:20.130471   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:55:20.139759   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:55:20.167843   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:55:20.191233   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:55:20.214438   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:55:20.235571   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:55:20.261436   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:55:20.285034   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:55:20.310624   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:55:20.332555   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:55:20.354176   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:55:20.374974   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:55:20.396001   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:55:20.411032   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:55:20.426186   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:55:20.441112   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:55:20.456730   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:55:20.472441   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:55:20.488012   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:55:20.502635   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:55:20.508164   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:55:20.519601   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523711   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523777   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.529016   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 11:55:20.538537   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:55:20.548100   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552319   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552375   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.557900   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:55:20.567792   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:55:20.577338   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581264   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581323   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.586529   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
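The test -L / ln -fs pairs above install each CA under /etc/ssl/certs/&lt;subject-hash&gt;.0 (e.g. 51391683.0, b5213941.0) so OpenSSL-based verification can locate it by hash. An illustrative Go equivalent, with paths and privileges simplified and not the actual implementation:

```go
// sketch: hash a CA with openssl and symlink it into /etc/ssl/certs
package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace an existing link if present
	return os.Symlink(pemPath, link)
}
```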
	I1028 11:55:20.596428   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:55:20.600115   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:55:20.600167   95151 kubeadm.go:934] updating node {m03 192.168.39.14 8443 v1.31.2 crio true true} ...
	I1028 11:55:20.600258   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:55:20.600291   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:55:20.600325   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:55:20.616989   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:55:20.617099   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
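The generated kube-vip config is a static pod manifest; as the later scp line in this log shows, it only needs to land under /etc/kubernetes/manifests for the kubelet to start it. A minimal sketch, with the file name and permissions as assumptions:

```go
// sketch: install a generated static-pod manifest where the kubelet will find it
package sketch

import "os"

func writeStaticPod(manifestYAML string) error {
	if err := os.MkdirAll("/etc/kubernetes/manifests", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", []byte(manifestYAML), 0o600)
}
```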
	I1028 11:55:20.617151   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.626357   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:55:20.626409   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.634842   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:55:20.634876   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634922   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:55:20.634942   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634948   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.634853   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:55:20.635007   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.635050   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:55:20.638692   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:55:20.638722   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:55:20.663836   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.663872   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:55:20.663905   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:55:20.663970   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.699827   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:55:20.699877   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:55:21.384145   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:55:21.393997   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:55:21.409884   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:55:21.425811   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:55:21.441992   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:55:21.445803   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
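The grep/echo pipeline above rewrites /etc/hosts idempotently: any stale control-plane.minikube.internal line is dropped before the fresh VIP entry is appended. A native Go sketch of the same idea (simplified error handling, no sudo, not minikube's code):

```go
// sketch: idempotent /etc/hosts entry, mirroring the shell pipeline in the log
package sketch

import (
	"os"
	"strings"
)

func ensureHostEntry(ip, host string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop any stale entry for this host
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
```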
	I1028 11:55:21.457453   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:21.579499   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:21.596582   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:21.597031   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:21.597081   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:21.612568   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I1028 11:55:21.613014   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:21.613608   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:21.613636   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:21.613983   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:21.614133   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:21.614251   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:55:21.614418   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:55:21.614445   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:21.617174   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617565   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:21.617589   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617762   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:21.617923   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:21.618054   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:21.618200   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:21.766904   95151 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:21.766967   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443"
	I1028 11:55:42.707746   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443": (20.940747813s)
	I1028 11:55:42.707786   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:55:43.259520   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m03 minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:55:43.364349   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:55:43.486876   95151 start.go:319] duration metric: took 21.872622243s to joinCluster
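The join itself reuses the token and discovery hash printed by `kubeadm token create --print-join-command` on the primary, plus the control-plane flags visible in the log (advertise address, bind port, CRI socket, node name). A sketch, not minikube's code, of assembling that command line:

```go
// sketch: build the kubeadm join command shown in the log from its parts
package sketch

import "fmt"

func joinCommand(token, caHash, advertiseIP, nodeName string) string {
	return fmt.Sprintf(
		"kubeadm join control-plane.minikube.internal:8443 --token %s "+
			"--discovery-token-ca-cert-hash %s --ignore-preflight-errors=all "+
			"--cri-socket unix:///var/run/crio/crio.sock --node-name=%s "+
			"--control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		token, caHash, nodeName, advertiseIP)
}
```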
	I1028 11:55:43.486974   95151 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:43.487346   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:43.488385   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:55:43.489624   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:43.714323   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:43.797310   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:55:43.797585   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:55:43.797659   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:55:43.797894   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m03" to be "Ready" ...
	I1028 11:55:43.797978   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:43.797989   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:43.797999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:43.798002   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:43.801478   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.298184   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.298206   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.298216   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.298222   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.301984   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.798900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.798925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.798933   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.798937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.802625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.298286   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.298308   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.298316   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.298323   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.301749   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.798575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.798599   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.798606   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.798609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.801730   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.802260   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:46.298797   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.298831   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.298843   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.298848   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.301856   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:46.798975   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.798994   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.799003   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.799009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.802334   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.298943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.298969   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.298981   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.298987   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.302012   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.799134   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.799156   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.799164   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.799170   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.802967   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.803491   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:48.298732   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.298760   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.298772   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.298778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.302148   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:48.799142   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.799170   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.799182   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.799190   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.802961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.298717   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.298741   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.298752   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.298759   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.302024   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.798693   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.798713   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.798721   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.798726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.832585   95151 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1028 11:55:49.833180   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:50.298166   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.298188   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.298197   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.298201   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.301302   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:50.798073   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.798095   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.798104   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.798108   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.803748   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:55:51.298872   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.298899   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.298910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.298913   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.301397   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:51.798388   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.798420   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.798428   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.798434   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.801659   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:52.298527   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.298549   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.298561   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.298565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.301585   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:52.302112   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:52.798187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.798212   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.798223   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.798228   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.801528   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.298514   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.298542   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.298550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.298554   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.301689   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.798539   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.798559   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.798574   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.798578   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.801491   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:54.298293   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.298317   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.298325   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.298330   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.302064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:54.302719   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:54.798749   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.798769   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.798778   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.798783   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.801841   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.298678   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.298701   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.298712   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.298716   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.302094   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.798085   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.798105   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.798113   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.798116   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.800935   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:56.298920   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.298949   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.298958   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.298962   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.302100   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.798358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.798381   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.798390   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.798394   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.801648   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.802259   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:57.298900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.298925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.298937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.298943   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.301768   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:57.798111   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.798136   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.798148   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.798154   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.802245   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:55:58.299121   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.299149   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.299162   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.299171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.302703   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:58.798590   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.798615   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.798628   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.798634   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.801208   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:59.299008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.299036   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.299047   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.299054   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.302735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:59.303420   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:59.798874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.798896   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.798903   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.798907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.802046   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.298533   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.298555   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.298562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.298567   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.301628   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.798592   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.798612   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.798619   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.798623   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.801213   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.298108   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.298133   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.298143   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.298148   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301184   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.301784   95151 node_ready.go:49] node "ha-273199-m03" has status "Ready":"True"
	I1028 11:56:01.301805   95151 node_ready.go:38] duration metric: took 17.503895303s for node "ha-273199-m03" to be "Ready" ...
	I1028 11:56:01.301814   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:01.301887   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:01.301896   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.301903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301911   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.308580   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:01.316771   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.316873   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:56:01.316885   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.316900   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.316907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.320308   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.320987   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.321003   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.321013   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.321019   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.323787   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.324347   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.324365   95151 pod_ready.go:82] duration metric: took 7.565058ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324373   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324419   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:56:01.324427   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.324433   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.324439   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.326735   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.327335   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.327355   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.327365   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.327373   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.329530   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.330057   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.330074   95151 pod_ready.go:82] duration metric: took 5.693547ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330086   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330136   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:56:01.330146   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.330155   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.330165   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.332526   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.332999   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.333016   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.333027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.333032   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.334989   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.335422   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.335440   95151 pod_ready.go:82] duration metric: took 5.348301ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335448   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335488   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:56:01.335496   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.335502   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.335506   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.337739   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.338582   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:01.338597   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.338604   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.338609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.340562   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.341152   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.341169   95151 pod_ready.go:82] duration metric: took 5.715551ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.341177   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.498553   95151 request.go:632] Waited for 157.309109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498638   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498650   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.498660   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.498665   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.501894   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.699071   95151 request.go:632] Waited for 196.385515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699155   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699161   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.699169   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.699174   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.702324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.702894   95151 pod_ready.go:93] pod "etcd-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.702916   95151 pod_ready.go:82] duration metric: took 361.733856ms for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.702934   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.898705   95151 request.go:632] Waited for 195.691939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898957   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898985   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.898999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.899009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.902374   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.098254   95151 request.go:632] Waited for 195.287162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098328   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098335   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.098347   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.098353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.101196   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.101738   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.101763   95151 pod_ready.go:82] duration metric: took 398.820372ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.101781   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.298212   95151 request.go:632] Waited for 196.275952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298275   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298281   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.298290   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.298301   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.301860   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.499036   95151 request.go:632] Waited for 196.376254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499126   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499138   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.499147   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.499155   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.502306   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.502777   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.502797   95151 pod_ready.go:82] duration metric: took 401.004802ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.502809   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.698962   95151 request.go:632] Waited for 196.058055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699040   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699049   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.699060   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.699069   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.702304   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.898265   95151 request.go:632] Waited for 195.32967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898332   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898337   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.898346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.898349   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.901285   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.901755   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.901774   95151 pod_ready.go:82] duration metric: took 398.957477ms for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.901786   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.098215   95151 request.go:632] Waited for 196.338003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098302   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098312   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.098326   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.098336   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.101391   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.299109   95151 request.go:632] Waited for 197.052748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299198   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.299211   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.299219   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.302429   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.303124   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.303143   95151 pod_ready.go:82] duration metric: took 401.346731ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.303154   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.499186   95151 request.go:632] Waited for 195.929738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499255   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499260   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.499268   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.499283   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.502463   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.698544   95151 request.go:632] Waited for 195.349647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698622   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698627   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.698635   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.698642   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.701741   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.702403   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.702426   95151 pod_ready.go:82] duration metric: took 399.264829ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.702441   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.898913   95151 request.go:632] Waited for 196.399022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899002   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899011   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.899023   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.899029   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.902056   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.099025   95151 request.go:632] Waited for 196.30082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099105   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099116   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.099127   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.099137   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.102284   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.102800   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.102822   95151 pod_ready.go:82] duration metric: took 400.371733ms for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.102837   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.299058   95151 request.go:632] Waited for 196.137259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299139   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299144   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.299153   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.299157   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.302746   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.499079   95151 request.go:632] Waited for 195.393701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499163   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499171   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.499185   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.499195   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.503387   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:56:04.504037   95151 pod_ready.go:93] pod "kube-proxy-9g4h7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.504061   95151 pod_ready.go:82] duration metric: took 401.216048ms for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.504076   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.698976   95151 request.go:632] Waited for 194.814472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699071   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.699079   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.699084   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.702055   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:04.898609   95151 request.go:632] Waited for 195.739677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898675   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898683   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.898693   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.898700   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.901923   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.902584   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.902605   95151 pod_ready.go:82] duration metric: took 398.518978ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.902614   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.098688   95151 request.go:632] Waited for 195.978821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098754   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098759   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.098768   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.098778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.102003   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.298290   95151 request.go:632] Waited for 195.293864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298361   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298369   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.298380   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.298386   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.301816   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.302344   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.302364   95151 pod_ready.go:82] duration metric: took 399.743307ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.302375   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.498499   95151 request.go:632] Waited for 196.032121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498559   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498565   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.498572   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.498584   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.501658   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.698555   95151 request.go:632] Waited for 196.349621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698639   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.698659   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.698670   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.701856   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.702478   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.702502   95151 pod_ready.go:82] duration metric: took 400.117869ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.702516   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.898432   95151 request.go:632] Waited for 195.801686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898504   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898512   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.898523   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.898535   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.901090   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:06.099148   95151 request.go:632] Waited for 197.39166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099243   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099256   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.099266   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.099273   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.102573   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.103298   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.103317   95151 pod_ready.go:82] duration metric: took 400.794152ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.103328   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.298494   95151 request.go:632] Waited for 195.077295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298597   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298623   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.298634   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.298639   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.301973   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.499177   95151 request.go:632] Waited for 196.369372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499245   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499253   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.499263   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.499271   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.503129   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.503622   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.503653   95151 pod_ready.go:82] duration metric: took 400.317222ms for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.503666   95151 pod_ready.go:39] duration metric: took 5.2018361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:06.503683   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:56:06.503735   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:56:06.519167   95151 api_server.go:72] duration metric: took 23.032149937s to wait for apiserver process to appear ...
	I1028 11:56:06.519193   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:56:06.519218   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:56:06.524148   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:56:06.524235   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:56:06.524247   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.524259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.524269   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.525138   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:56:06.525206   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:56:06.525222   95151 api_server.go:131] duration metric: took 6.021057ms to wait for apiserver health ...
	I1028 11:56:06.525232   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:56:06.698920   95151 request.go:632] Waited for 173.589854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699014   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699026   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.699037   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.699046   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.705719   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:06.711799   95151 system_pods.go:59] 24 kube-system pods found
	I1028 11:56:06.711826   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:06.711831   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:06.711834   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:06.711837   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:06.711840   95151 system_pods.go:61] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:06.711845   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:06.711849   95151 system_pods.go:61] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:06.711858   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:06.711864   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:06.711869   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:06.711877   95151 system_pods.go:61] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:06.711884   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:06.711893   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:06.711901   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:06.711906   95151 system_pods.go:61] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:06.711911   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:06.711917   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:06.711923   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:06.711926   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:06.711932   95151 system_pods.go:61] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:06.711935   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:06.711940   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:06.711943   95151 system_pods.go:61] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:06.711947   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:06.711955   95151 system_pods.go:74] duration metric: took 186.713107ms to wait for pod list to return data ...
	I1028 11:56:06.711967   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:56:06.899177   95151 request.go:632] Waited for 187.113111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899242   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.899250   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.899255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.902353   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.902463   95151 default_sa.go:45] found service account: "default"
	I1028 11:56:06.902477   95151 default_sa.go:55] duration metric: took 190.499796ms for default service account to be created ...
	I1028 11:56:06.902489   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:56:07.098925   95151 request.go:632] Waited for 196.358925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099006   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099015   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.099027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.099034   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.104802   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:56:07.111244   95151 system_pods.go:86] 24 kube-system pods found
	I1028 11:56:07.111271   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:07.111276   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:07.111280   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:07.111284   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:07.111287   95151 system_pods.go:89] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:07.111292   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:07.111296   95151 system_pods.go:89] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:07.111301   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:07.111306   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:07.111312   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:07.111320   95151 system_pods.go:89] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:07.111326   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:07.111336   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:07.111342   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:07.111348   95151 system_pods.go:89] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:07.111354   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:07.111358   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:07.111364   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:07.111368   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:07.111374   95151 system_pods.go:89] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:07.111377   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:07.111386   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:07.111391   95151 system_pods.go:89] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:07.111394   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:07.111402   95151 system_pods.go:126] duration metric: took 208.905709ms to wait for k8s-apps to be running ...
	I1028 11:56:07.111413   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:56:07.111468   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:56:07.126987   95151 system_svc.go:56] duration metric: took 15.565787ms WaitForService to wait for kubelet
	I1028 11:56:07.127011   95151 kubeadm.go:582] duration metric: took 23.639999996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:56:07.127031   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:56:07.298754   95151 request.go:632] Waited for 171.640481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298832   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298839   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.298848   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.298857   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.302715   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:07.303776   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303797   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303807   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303810   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303814   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303817   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303821   95151 node_conditions.go:105] duration metric: took 176.784967ms to run NodePressure ...
	I1028 11:56:07.303834   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:56:07.303857   95151 start.go:255] writing updated cluster config ...
	I1028 11:56:07.304142   95151 ssh_runner.go:195] Run: rm -f paused
	I1028 11:56:07.355822   95151 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:56:07.357678   95151 out.go:177] * Done! kubectl is now configured to use "ha-273199" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.271927023Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-fnvwg,Uid:7e89846f-39f0-42a4-b343-0ae004376bc7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116568595326394,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:56:08.271095605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7e8f1437-aa9b-4d11-a516-f545f55e271c,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1730116437166402002,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T11:53:56.836966681Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc26g,Uid:352843f5-74ea-4f39-9b5e-8a14206f5ef6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116437152514863,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74ea-4f39-9b5e-8a14206f5ef6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:56.837780003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7rnn9,Uid:6addf18c-48d4-4b46-9695-d3c73f66dcf7,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1730116437137041444,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:56.826411741Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-tr5vf,Uid:1523079e-d7eb-432d-8023-83ac95c1c853,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116424827712969,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-28T11:53:43.016311556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&PodSandboxMetadata{Name:kindnet-2gldl,Uid:669d86dc-15f1-4cda-9f16-6ebfabad12ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116424826468891,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:43.020213220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-273199,Uid:ec1fb61a398f082d62933fd99a5e91c8,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1730116411862344870,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{kubernetes.io/config.hash: ec1fb61a398f082d62933fd99a5e91c8,kubernetes.io/config.seen: 2024-10-28T11:53:31.392312295Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-273199,Uid:2afa0eef601ae02df3405cd2d523046c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411860656774,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2afa
0eef601ae02df3405cd2d523046c,kubernetes.io/config.seen: 2024-10-28T11:53:31.392311542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-273199,Uid:de3f68a446dbf81588ffdebc94e65e05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411858786132,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de3f68a446dbf81588ffdebc94e65e05,kubernetes.io/config.seen: 2024-10-28T11:53:31.392310435Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-273199,Ui
d:67aa1fe51ef7e2d6640194db4db476a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411847852262,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.208:8443,kubernetes.io/config.hash: 67aa1fe51ef7e2d6640194db4db476a0,kubernetes.io/config.seen: 2024-10-28T11:53:31.392309218Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&PodSandboxMetadata{Name:etcd-ha-273199,Uid:af5894cc6d394a4575ef924f31654a84,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411838769279,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-273199,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: af5894cc6d394a4575ef924f31654a84,kubernetes.io/config.seen: 2024-10-28T11:53:31.392305945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=960e7037-b2e4-46eb-8101-a18c749c3bf2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.272673941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2a7b186-4acd-4379-989f-8f3fadc38f18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.272733315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2a7b186-4acd-4379-989f-8f3fadc38f18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.272979955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2a7b186-4acd-4379-989f-8f3fadc38f18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.276197193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51694f77-1b5c-4a22-adb0-300575e60afa name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.276275567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51694f77-1b5c-4a22-adb0-300575e60afa name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.277208023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4795e235-3037-44f4-8f32-7d85b567ec2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.278252590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116782278226967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4795e235-3037-44f4-8f32-7d85b567ec2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.280958615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da79f89f-35e5-4f56-9f03-f8f7ecd713e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.281058252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da79f89f-35e5-4f56-9f03-f8f7ecd713e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.281260361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da79f89f-35e5-4f56-9f03-f8f7ecd713e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.315952748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9b64583-c565-47b3-9000-60e421699d4e name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.316075114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9b64583-c565-47b3-9000-60e421699d4e name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.317505295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64efff8c-938d-41b8-9117-823ba764584c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.318083222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116782318060082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64efff8c-938d-41b8-9117-823ba764584c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.318565188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f03298f2-ab8c-45cc-ab67-eaa59db9a34b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.318645150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f03298f2-ab8c-45cc-ab67-eaa59db9a34b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.318864392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f03298f2-ab8c-45cc-ab67-eaa59db9a34b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.353939262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd021d9a-5433-4caa-a904-cde12514170e name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.354053010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd021d9a-5433-4caa-a904-cde12514170e name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.355252808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa56bab3-0876-419b-90e4-fe479c776fa4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.355678614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116782355657034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa56bab3-0876-419b-90e4-fe479c776fa4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.356251985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12e2c7ca-c65e-4bd5-afda-ad0f38819a2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.356325137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12e2c7ca-c65e-4bd5-afda-ad0f38819a2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:42 ha-273199 crio[663]: time="2024-10-28 11:59:42.356551077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12e2c7ca-c65e-4bd5-afda-ad0f38819a2b name=/runtime.v1.RuntimeService/ListContainers
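
The crio debug entries above record the runtime being polled over the CRI API (Version, ImageFsInfo, and ListContainers against /runtime.v1.RuntimeService). For reference only, here is a minimal sketch, not part of the test run, of how such a ListContainers call can be issued against the CRI socket; it assumes the k8s.io/cri-api Go bindings and crio's conventional socket path /var/run/crio/crio.sock, both of which are assumptions rather than values taken from this report. The "container status" table that follows is the same information rendered in tabular form.

	// Minimal sketch: list running containers over the CRI socket, mirroring the
	// /runtime.v1.RuntimeService/ListContainers requests recorded in the crio log above.
	// Socket path and bindings are assumptions; this is not part of the test harness.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// crio serves the CRI over a unix socket; path assumed, adjust as needed.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Same filter seen in the log: only containers in CONTAINER_RUNNING state.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\n", c.Id[:13], c.Metadata.Name)
		}
	}
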
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	609ad54d4add2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5aab280940ba8       busybox-7dff88458-fnvwg
	fe58f2eaad87a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   257fc926b128d       coredns-7c65d6cfc9-hc26g
	74749e3632776       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a33a6d6dc5f66       coredns-7c65d6cfc9-7rnn9
	72c80fedf6643       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   53cd5c1c15675       storage-provisioner
	e082051f544c2       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      5 minutes ago       Running             kindnet-cni               0                   ef059ce23254d       kindnet-2gldl
	82471ae5ddf92       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      5 minutes ago       Running             kube-proxy                0                   0cbf13a852cd2       kube-proxy-tr5vf
	39409b2e85012       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   cc7ea362731d6       kube-vip-ha-273199
	8b350f0da3b16       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   43ab783eb9151       kube-apiserver-ha-273199
	07773cb979d8f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2541db65f40ae       kube-controller-manager-ha-273199
	6fb4822a5b791       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   737b1cd7f74b4       kube-scheduler-ha-273199
	ec2df51593c58       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   32e3db6238d43       etcd-ha-273199
	
	
	==> coredns [74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d] <==
	[INFO] 10.244.1.2:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227007s
	[INFO] 10.244.1.2:38770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002925427s
	[INFO] 10.244.1.2:48927 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147448s
	[INFO] 10.244.1.2:38077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192376s
	[INFO] 10.244.0.4:54968 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.0.4:57503 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110201s
	[INFO] 10.244.0.4:34291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061267s
	[INFO] 10.244.0.4:50921 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128077s
	[INFO] 10.244.0.4:39917 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062677s
	[INFO] 10.244.2.2:60183 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014203s
	[INFO] 10.244.2.2:40291 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692422s
	[INFO] 10.244.2.2:46423 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149349s
	[INFO] 10.244.2.2:54634 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124106s
	[INFO] 10.244.1.2:50363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142769s
	[INFO] 10.244.1.2:35968 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225253s
	[INFO] 10.244.1.2:45996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107605s
	[INFO] 10.244.1.2:49921 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093269s
	[INFO] 10.244.0.4:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012322s
	[INFO] 10.244.2.2:52722 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002033s
	[INFO] 10.244.2.2:57825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011394s
	[INFO] 10.244.1.2:34495 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211997s
	[INFO] 10.244.1.2:44656 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000288144s
	[INFO] 10.244.0.4:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021258s
	[INFO] 10.244.2.2:60661 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153264s
	[INFO] 10.244.2.2:45534 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088052s
	
	
	==> coredns [fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce] <==
	[INFO] 10.244.0.4:38250 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001327706s
	[INFO] 10.244.0.4:43351 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000111923s
	[INFO] 10.244.0.4:51500 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001177333s
	[INFO] 10.244.2.2:48939 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000124212s
	[INFO] 10.244.2.2:50808 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000124833s
	[INFO] 10.244.1.2:47587 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190204s
	[INFO] 10.244.0.4:58247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001672481s
	[INFO] 10.244.0.4:37091 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169137s
	[INFO] 10.244.0.4:48641 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098052s
	[INFO] 10.244.2.2:54836 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104545s
	[INFO] 10.244.2.2:40126 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854336s
	[INFO] 10.244.2.2:52894 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163896s
	[INFO] 10.244.2.2:35333 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230414s
	[INFO] 10.244.0.4:41974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152869s
	[INFO] 10.244.0.4:36380 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062783s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048517s
	[INFO] 10.244.2.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018024s
	[INFO] 10.244.2.2:38193 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125455s
	[INFO] 10.244.1.2:33651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271979s
	[INFO] 10.244.1.2:35705 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159131s
	[INFO] 10.244.0.4:48176 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111737s
	[INFO] 10.244.0.4:38598 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127464s
	[INFO] 10.244.0.4:32940 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000141046s
	[INFO] 10.244.2.2:43181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212895s
	[INFO] 10.244.2.2:43421 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090558s
	
	
	==> describe nodes <==
	Name:               ha-273199
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:53:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-273199
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c1c6593d854f8388a3b75213b790ab
	  System UUID:                c4c1c659-3d85-4f83-88a3-b75213b790ab
	  Boot ID:                    1bfb0ff9-0991-4c08-97cb-b1b218815106
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-7c65d6cfc9-7rnn9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m59s
	  kube-system                 coredns-7c65d6cfc9-hc26g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m59s
	  kube-system                 etcd-ha-273199                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m4s
	  kube-system                 kindnet-2gldl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m
	  kube-system                 kube-apiserver-ha-273199             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-ha-273199    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-tr5vf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-ha-273199             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-273199                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m57s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m11s (x7 over 6m11s)  kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m11s)  kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m11s)  kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m4s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m4s                   kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s                   kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s                   kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m                     node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  NodeReady                5m46s                  kubelet          Node ha-273199 status is now: NodeReady
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	
	
	Name:               ha-273199-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:54:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:57:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-273199-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d185c9b1be043df924a5dc234d517bb
	  System UUID:                2d185c9b-1be0-43df-924a-5dc234d517bb
	  Boot ID:                    707068c3-7da2-4705-9622-6b089ce29c40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8tvkk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-273199-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-ts2mp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-273199-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-273199-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-nrzn7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-273199-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-273199-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-273199-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-273199-m02 status is now: NodeNotReady
	
	
	Name:               ha-273199-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:55:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-273199-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d112805c85f46e58297ecf352114eb9
	  System UUID:                1d112805-c85f-46e5-8297-ecf352114eb9
	  Boot ID:                    07c61f8b-a2c4-4310-b7a1-41ac039bba9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g54mk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-273199-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m
	  kube-system                 kindnet-rz4mf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-apiserver-ha-273199-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ha-273199-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-9g4h7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-ha-273199-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-vip-ha-273199-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m2s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m2s)  kubelet          Node ha-273199-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m2s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           3m54s                node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	
	
	Name:               ha-273199-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_56_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:57:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ha-273199-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43b84cefa5dd4131ade4071e67ae7a87
	  System UUID:                43b84cef-a5dd-4131-ade4-071e67ae7a87
	  Boot ID:                    bfbeda91-dd05-4597-adc6-b479c1c2dd66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bx2hn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-7pzm5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-273199-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-273199-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049625] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.737052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891479] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.789015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.644647] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.122482] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.184258] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.115821] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.235503] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.601274] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.514017] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.057056] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251877] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.071885] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.801233] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.354632] kauditd_printk_skb: 38 callbacks suppressed
	[Oct28 11:54] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3] <==
	{"level":"warn","ts":"2024-10-28T11:59:42.620947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.627477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.631351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.639382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.651537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.663058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.667806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.670942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.678255Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.686114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.692152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.695594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.698150Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.698748Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.703762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.710807Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.717192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.721252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.724443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.728241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.736128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.742253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.768202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.770232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:42.799659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:59:42 up 6 min,  0 users,  load average: 0.38, 0.35, 0.18
	Linux ha-273199 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9] <==
	I1028 11:59:06.530779       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:16.530156       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:16.530273       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:16.530505       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:16.530534       1 main.go:300] handling current node
	I1028 11:59:16.530562       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:16.530579       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:16.530764       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:16.530799       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:26.530030       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:26.530150       1 main.go:300] handling current node
	I1028 11:59:26.530184       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:26.530202       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:26.530461       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:26.530495       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:26.530632       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:26.530655       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:36.531055       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:36.531126       1 main.go:300] handling current node
	I1028 11:59:36.531149       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:36.531155       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:36.531406       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:36.531425       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:36.531556       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:36.531571       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56] <==
	I1028 11:53:37.479954       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:53:38.366724       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:53:38.396043       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:53:38.413224       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:53:42.979540       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:53:43.083644       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:55:40.973661       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.973734       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.741µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1028 11:55:40.974882       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.976075       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.977370       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.890629ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1028 11:56:12.749438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33980: use of closed network connection
	E1028 11:56:12.923851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33996: use of closed network connection
	E1028 11:56:13.281780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34038: use of closed network connection
	E1028 11:56:13.456851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34054: use of closed network connection
	E1028 11:56:13.625829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34076: use of closed network connection
	E1028 11:56:13.792266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34090: use of closed network connection
	E1028 11:56:13.965533       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34100: use of closed network connection
	E1028 11:56:14.136211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34124: use of closed network connection
	E1028 11:56:14.414608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34162: use of closed network connection
	E1028 11:56:14.591367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34188: use of closed network connection
	E1028 11:56:14.760347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34200: use of closed network connection
	E1028 11:56:14.922486       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34206: use of closed network connection
	E1028 11:56:15.092625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34220: use of closed network connection
	E1028 11:56:15.260557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34244: use of closed network connection
	
	
	==> kube-controller-manager [07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df] <==
	I1028 11:56:41.255363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.287882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.504368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.718228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m03"
	I1028 11:56:41.866442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.227080       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-273199-m04"
	I1028 11:56:42.253788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.533477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199"
	I1028 11:56:43.703600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:43.733191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.386515       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.495725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:51.380862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:57:01.650243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:02.239477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:12.162277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:58:02.262145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.262722       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:58:02.289111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.371759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.617397ms"
	I1028 11:58:02.371873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.712µs"
	I1028 11:58:03.751638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:07.489074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	
	
	==> kube-proxy [82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:53:45.160274       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:53:45.173814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E1028 11:53:45.173942       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:53:45.205451       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:53:45.205509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:53:45.205540       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:53:45.207870       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:53:45.208259       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:53:45.208291       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:53:45.209606       1 config.go:328] "Starting node config controller"
	I1028 11:53:45.209665       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:53:45.210054       1 config.go:199] "Starting service config controller"
	I1028 11:53:45.210078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:53:45.210110       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:53:45.210127       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:53:45.310570       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:53:45.310626       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:53:45.310585       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c] <==
	I1028 11:53:39.113228       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:55:40.277591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.278684       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 164d41fa-0fff-4f4c-8f09-011e57fc1094(kube-system/kindnet-whfj9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-whfj9"
	E1028 11:55:40.278764       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-whfj9"
	I1028 11:55:40.278832       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.294817       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.294939       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 88c92727-3ef1-4b38-9df5-771fe9917f5e(kube-system/kube-proxy-qxpt8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qxpt8"
	E1028 11:55:40.294972       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-qxpt8"
	I1028 11:55:40.295047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.307670       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.307788       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4899b8e5-73ce-487e-81ca-f833a1dc900b(kube-system/kube-proxy-9g4h7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9g4h7"
	E1028 11:55:40.307822       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-9g4h7"
	I1028 11:55:40.307855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.324371       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:40.324469       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6b2fd99-538e-49be-bda5-b0e1c9edb32c(kube-system/kindnet-4bn7m) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4bn7m"
	E1028 11:55:40.324505       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-4bn7m"
	I1028 11:55:40.324540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:42.324511       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:55:42.324607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33ad0e92-e29c-4e54-8593-7cffd69fd439(kube-system/kindnet-rz4mf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rz4mf"
	E1028 11:55:42.324641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-rz4mf"
	I1028 11:55:42.324700       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:56:08.295366       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	E1028 11:56:08.295536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7e89846f-39f0-42a4-b343-0ae004376bc7(default/busybox-7dff88458-fnvwg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fnvwg"
	E1028 11:56:08.295580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" pod="default/busybox-7dff88458-fnvwg"
	I1028 11:56:08.295605       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	
	
	==> kubelet <==
	Oct 28 11:58:28 ha-273199 kubelet[1304]: E1028 11:58:28.349532    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116708349286872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.290701    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:58:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351743    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351767    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353760    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353814    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356841    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356866    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358886    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358944    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.361731    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.362240    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363560    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363977    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.290570    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366212    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366235    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-273199 -n ha-273199
helpers_test.go:261: (dbg) Run:  kubectl --context ha-273199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.37s)
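Note: the kubelet log above repeatedly fails to set up its iptables canary because the ip6tables `nat' table is missing in the guest. A minimal triage sketch against the same profile, assuming the ip6table_nat kernel module is merely unloaded rather than absent from the guest image:

    out/minikube-linux-amd64 -p ha-273199 ssh "sudo ip6tables -t nat -L"    # reproduces: can't initialize ip6tables table `nat'
    out/minikube-linux-amd64 -p ha-273199 ssh "sudo modprobe ip6table_nat"  # hypothetical fix: load the module, if the guest kernel ships it
    out/minikube-linux-amd64 -p ha-273199 ssh "sudo ip6tables -t nat -L"    # should now list empty nat chains

If the module is not shipped in the guest image, the canary error is likely recurring noise rather than the cause of this node-stop failure.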

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.396516694s)
ha_test.go:415: expected profile "ha-273199" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-273199\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-273199\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-273199\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.208\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.225\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.14\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.29\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevir
t\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\"
,\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-273199 -n ha-273199
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 logs -n 25: (1.352386047s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m03_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m04 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp testdata/cp-test.txt                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m04_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03:/home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m03 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-273199 node stop m02 -v=7                                                     | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:52:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:52:57.905238   95151 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:52:57.905348   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905358   95151 out.go:358] Setting ErrFile to fd 2...
	I1028 11:52:57.905363   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905525   95151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:52:57.906087   95151 out.go:352] Setting JSON to false
	I1028 11:52:57.907021   95151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5728,"bootTime":1730110650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:52:57.907126   95151 start.go:139] virtualization: kvm guest
	I1028 11:52:57.909586   95151 out.go:177] * [ha-273199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:52:57.911228   95151 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:52:57.911224   95151 notify.go:220] Checking for updates...
	I1028 11:52:57.912881   95151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:52:57.914463   95151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:52:57.915977   95151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:57.917406   95151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:52:57.918858   95151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:52:57.920382   95151 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:52:57.956004   95151 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:52:57.957439   95151 start.go:297] selected driver: kvm2
	I1028 11:52:57.957454   95151 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:52:57.957467   95151 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:52:57.958216   95151 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.958309   95151 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:52:57.973197   95151 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:52:57.973244   95151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:52:57.973498   95151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:52:57.973536   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:52:57.973597   95151 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:52:57.973608   95151 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:52:57.973673   95151 start.go:340] cluster config:
	{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1028 11:52:57.973775   95151 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.975793   95151 out.go:177] * Starting "ha-273199" primary control-plane node in "ha-273199" cluster
	I1028 11:52:57.977410   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:52:57.977445   95151 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:52:57.977454   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:52:57.977554   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:52:57.977568   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:52:57.977888   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:52:57.977914   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json: {Name:mk29535b2b544db75ec78b7c2f3618df28a4affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:52:57.978059   95151 start.go:360] acquireMachinesLock for ha-273199: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:52:57.978100   95151 start.go:364] duration metric: took 24.255µs to acquireMachinesLock for "ha-273199"
	I1028 11:52:57.978122   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:52:57.978188   95151 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:52:57.980939   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:52:57.981099   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:57.981147   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:57.995094   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I1028 11:52:57.995525   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:57.996093   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:52:57.996110   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:57.996513   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:57.996734   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:52:57.996948   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:52:57.997198   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:52:57.997236   95151 client.go:168] LocalClient.Create starting
	I1028 11:52:57.997293   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:52:57.997346   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997371   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997456   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:52:57.997488   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997509   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997543   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:52:57.997564   95151 main.go:141] libmachine: (ha-273199) Calling .PreCreateCheck
	I1028 11:52:57.998077   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:52:57.998575   95151 main.go:141] libmachine: Creating machine...
	I1028 11:52:57.998591   95151 main.go:141] libmachine: (ha-273199) Calling .Create
	I1028 11:52:57.998762   95151 main.go:141] libmachine: (ha-273199) Creating KVM machine...
	I1028 11:52:58.000213   95151 main.go:141] libmachine: (ha-273199) DBG | found existing default KVM network
	I1028 11:52:58.000923   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.000765   95174 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045e0}
	I1028 11:52:58.000944   95151 main.go:141] libmachine: (ha-273199) DBG | created network xml: 
	I1028 11:52:58.000958   95151 main.go:141] libmachine: (ha-273199) DBG | <network>
	I1028 11:52:58.000965   95151 main.go:141] libmachine: (ha-273199) DBG |   <name>mk-ha-273199</name>
	I1028 11:52:58.000975   95151 main.go:141] libmachine: (ha-273199) DBG |   <dns enable='no'/>
	I1028 11:52:58.000981   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.000999   95151 main.go:141] libmachine: (ha-273199) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:52:58.001012   95151 main.go:141] libmachine: (ha-273199) DBG |     <dhcp>
	I1028 11:52:58.001028   95151 main.go:141] libmachine: (ha-273199) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:52:58.001044   95151 main.go:141] libmachine: (ha-273199) DBG |     </dhcp>
	I1028 11:52:58.001076   95151 main.go:141] libmachine: (ha-273199) DBG |   </ip>
	I1028 11:52:58.001096   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.001107   95151 main.go:141] libmachine: (ha-273199) DBG | </network>
	I1028 11:52:58.001116   95151 main.go:141] libmachine: (ha-273199) DBG | 
	I1028 11:52:58.006306   95151 main.go:141] libmachine: (ha-273199) DBG | trying to create private KVM network mk-ha-273199 192.168.39.0/24...
	I1028 11:52:58.068689   95151 main.go:141] libmachine: (ha-273199) DBG | private KVM network mk-ha-273199 192.168.39.0/24 created
	I1028 11:52:58.068733   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.068675   95174 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.068745   95151 main.go:141] libmachine: (ha-273199) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.068764   95151 main.go:141] libmachine: (ha-273199) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:52:58.068841   95151 main.go:141] libmachine: (ha-273199) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:52:58.350673   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.350525   95174 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa...
	I1028 11:52:58.570859   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570715   95174 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk...
	I1028 11:52:58.570893   95151 main.go:141] libmachine: (ha-273199) DBG | Writing magic tar header
	I1028 11:52:58.570902   95151 main.go:141] libmachine: (ha-273199) DBG | Writing SSH key tar header
	I1028 11:52:58.570910   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570831   95174 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.570926   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199
	I1028 11:52:58.570998   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 (perms=drwx------)
	I1028 11:52:58.571026   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:52:58.571056   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:52:58.571074   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.571082   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:52:58.571094   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:52:58.571102   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:52:58.571107   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home
	I1028 11:52:58.571113   95151 main.go:141] libmachine: (ha-273199) DBG | Skipping /home - not owner
	I1028 11:52:58.571126   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:52:58.571143   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:52:58.571178   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:52:58.571193   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:52:58.571219   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:58.572260   95151 main.go:141] libmachine: (ha-273199) define libvirt domain using xml: 
	I1028 11:52:58.572286   95151 main.go:141] libmachine: (ha-273199) <domain type='kvm'>
	I1028 11:52:58.572294   95151 main.go:141] libmachine: (ha-273199)   <name>ha-273199</name>
	I1028 11:52:58.572299   95151 main.go:141] libmachine: (ha-273199)   <memory unit='MiB'>2200</memory>
	I1028 11:52:58.572304   95151 main.go:141] libmachine: (ha-273199)   <vcpu>2</vcpu>
	I1028 11:52:58.572308   95151 main.go:141] libmachine: (ha-273199)   <features>
	I1028 11:52:58.572313   95151 main.go:141] libmachine: (ha-273199)     <acpi/>
	I1028 11:52:58.572324   95151 main.go:141] libmachine: (ha-273199)     <apic/>
	I1028 11:52:58.572330   95151 main.go:141] libmachine: (ha-273199)     <pae/>
	I1028 11:52:58.572339   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572346   95151 main.go:141] libmachine: (ha-273199)   </features>
	I1028 11:52:58.572356   95151 main.go:141] libmachine: (ha-273199)   <cpu mode='host-passthrough'>
	I1028 11:52:58.572364   95151 main.go:141] libmachine: (ha-273199)   
	I1028 11:52:58.572375   95151 main.go:141] libmachine: (ha-273199)   </cpu>
	I1028 11:52:58.572382   95151 main.go:141] libmachine: (ha-273199)   <os>
	I1028 11:52:58.572393   95151 main.go:141] libmachine: (ha-273199)     <type>hvm</type>
	I1028 11:52:58.572409   95151 main.go:141] libmachine: (ha-273199)     <boot dev='cdrom'/>
	I1028 11:52:58.572428   95151 main.go:141] libmachine: (ha-273199)     <boot dev='hd'/>
	I1028 11:52:58.572442   95151 main.go:141] libmachine: (ha-273199)     <bootmenu enable='no'/>
	I1028 11:52:58.572452   95151 main.go:141] libmachine: (ha-273199)   </os>
	I1028 11:52:58.572462   95151 main.go:141] libmachine: (ha-273199)   <devices>
	I1028 11:52:58.572470   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='cdrom'>
	I1028 11:52:58.572481   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/boot2docker.iso'/>
	I1028 11:52:58.572489   95151 main.go:141] libmachine: (ha-273199)       <target dev='hdc' bus='scsi'/>
	I1028 11:52:58.572513   95151 main.go:141] libmachine: (ha-273199)       <readonly/>
	I1028 11:52:58.572529   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572544   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='disk'>
	I1028 11:52:58.572557   95151 main.go:141] libmachine: (ha-273199)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:52:58.572570   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk'/>
	I1028 11:52:58.572580   95151 main.go:141] libmachine: (ha-273199)       <target dev='hda' bus='virtio'/>
	I1028 11:52:58.572589   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572599   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572625   95151 main.go:141] libmachine: (ha-273199)       <source network='mk-ha-273199'/>
	I1028 11:52:58.572647   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572659   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572669   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572681   95151 main.go:141] libmachine: (ha-273199)       <source network='default'/>
	I1028 11:52:58.572689   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572698   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572708   95151 main.go:141] libmachine: (ha-273199)     <serial type='pty'>
	I1028 11:52:58.572719   95151 main.go:141] libmachine: (ha-273199)       <target port='0'/>
	I1028 11:52:58.572747   95151 main.go:141] libmachine: (ha-273199)     </serial>
	I1028 11:52:58.572759   95151 main.go:141] libmachine: (ha-273199)     <console type='pty'>
	I1028 11:52:58.572769   95151 main.go:141] libmachine: (ha-273199)       <target type='serial' port='0'/>
	I1028 11:52:58.572780   95151 main.go:141] libmachine: (ha-273199)     </console>
	I1028 11:52:58.572789   95151 main.go:141] libmachine: (ha-273199)     <rng model='virtio'>
	I1028 11:52:58.572801   95151 main.go:141] libmachine: (ha-273199)       <backend model='random'>/dev/random</backend>
	I1028 11:52:58.572815   95151 main.go:141] libmachine: (ha-273199)     </rng>
	I1028 11:52:58.572825   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572833   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572844   95151 main.go:141] libmachine: (ha-273199)   </devices>
	I1028 11:52:58.572852   95151 main.go:141] libmachine: (ha-273199) </domain>
	I1028 11:52:58.572861   95151 main.go:141] libmachine: (ha-273199) 
	I1028 11:52:58.577134   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:42:ba:53 in network default
	I1028 11:52:58.577786   95151 main.go:141] libmachine: (ha-273199) Ensuring networks are active...
	I1028 11:52:58.577821   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:58.578546   95151 main.go:141] libmachine: (ha-273199) Ensuring network default is active
	I1028 11:52:58.578856   95151 main.go:141] libmachine: (ha-273199) Ensuring network mk-ha-273199 is active
	I1028 11:52:58.579358   95151 main.go:141] libmachine: (ha-273199) Getting domain xml...
	I1028 11:52:58.580118   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:59.782570   95151 main.go:141] libmachine: (ha-273199) Waiting to get IP...
	I1028 11:52:59.783496   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:59.783901   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:52:59.783927   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:59.783876   95174 retry.go:31] will retry after 311.934457ms: waiting for machine to come up
	I1028 11:53:00.097445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.097916   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.097939   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.097877   95174 retry.go:31] will retry after 388.795801ms: waiting for machine to come up
	I1028 11:53:00.488689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.489130   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.489162   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.489047   95174 retry.go:31] will retry after 341.439374ms: waiting for machine to come up
	I1028 11:53:00.831825   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.832326   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.832354   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.832259   95174 retry.go:31] will retry after 537.545151ms: waiting for machine to come up
	I1028 11:53:01.371089   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.371572   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.371603   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.371503   95174 retry.go:31] will retry after 575.351282ms: waiting for machine to come up
	I1028 11:53:01.948343   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.948756   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.948778   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.948711   95174 retry.go:31] will retry after 886.467527ms: waiting for machine to come up
	I1028 11:53:02.836558   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:02.836900   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:02.836930   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:02.836853   95174 retry.go:31] will retry after 1.015980502s: waiting for machine to come up
	I1028 11:53:03.854959   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:03.855391   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:03.855437   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:03.855271   95174 retry.go:31] will retry after 1.050486499s: waiting for machine to come up
	I1028 11:53:04.907614   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:04.908201   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:04.908229   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:04.908145   95174 retry.go:31] will retry after 1.491832435s: waiting for machine to come up
	I1028 11:53:06.401910   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:06.402491   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:06.402518   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:06.402445   95174 retry.go:31] will retry after 1.441307708s: waiting for machine to come up
	I1028 11:53:07.846099   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:07.846578   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:07.846619   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:07.846526   95174 retry.go:31] will retry after 2.820165032s: waiting for machine to come up
	I1028 11:53:10.670238   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:10.670586   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:10.670616   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:10.670541   95174 retry.go:31] will retry after 2.961295833s: waiting for machine to come up
	I1028 11:53:13.633316   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:13.633782   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:13.633805   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:13.633732   95174 retry.go:31] will retry after 3.308614209s: waiting for machine to come up
	I1028 11:53:16.945522   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:16.946011   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:16.946110   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:16.946030   95174 retry.go:31] will retry after 3.990479431s: waiting for machine to come up
	I1028 11:53:20.937712   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938109   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has current primary IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938130   95151 main.go:141] libmachine: (ha-273199) Found IP for machine: 192.168.39.208
	I1028 11:53:20.938142   95151 main.go:141] libmachine: (ha-273199) Reserving static IP address...
	I1028 11:53:20.938499   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find host DHCP lease matching {name: "ha-273199", mac: "52:54:00:22:d4:52", ip: "192.168.39.208"} in network mk-ha-273199
	I1028 11:53:21.008969   95151 main.go:141] libmachine: (ha-273199) DBG | Getting to WaitForSSH function...
	I1028 11:53:21.008999   95151 main.go:141] libmachine: (ha-273199) Reserved static IP address: 192.168.39.208
	I1028 11:53:21.009011   95151 main.go:141] libmachine: (ha-273199) Waiting for SSH to be available...
	I1028 11:53:21.011668   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012047   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.012076   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012164   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH client type: external
	I1028 11:53:21.012204   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa (-rw-------)
	I1028 11:53:21.012233   95151 main.go:141] libmachine: (ha-273199) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:53:21.012252   95151 main.go:141] libmachine: (ha-273199) DBG | About to run SSH command:
	I1028 11:53:21.012267   95151 main.go:141] libmachine: (ha-273199) DBG | exit 0
	I1028 11:53:21.139407   95151 main.go:141] libmachine: (ha-273199) DBG | SSH cmd err, output: <nil>: 
	I1028 11:53:21.139608   95151 main.go:141] libmachine: (ha-273199) KVM machine creation complete!
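The lines above show libmachine polling the libvirt DHCP leases for the new guest, sleeping a little longer after each miss until the IP appears. A minimal, self-contained Go sketch of that poll-with-growing-backoff pattern (a hypothetical helper, not minikube's actual retry.go) looks like this:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or attempts run out, sleeping a
// little longer (plus jitter) after each failure, much like the
// "will retry after ..." lines in the log above.
func retry(maxAttempts int, fn func() error) error {
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if attempt == maxAttempts {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		// up to 50% jitter so concurrent waiters do not poll in lockstep
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("unreachable")
}

func main() {
	tries := 0
	_ = retry(10, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}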
	I1028 11:53:21.140109   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:21.140683   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.140882   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.141093   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:53:21.141114   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:21.142660   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:53:21.142693   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:53:21.142699   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:53:21.142707   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.144906   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145252   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.145272   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145401   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.145570   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145700   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145811   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.145966   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.146169   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.146182   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:53:21.258494   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:53:21.258518   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:53:21.258525   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.261399   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.261893   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.261920   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.262110   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.262319   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262635   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.262887   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.263058   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.263068   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:53:21.376384   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:53:21.376474   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:53:21.376484   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:53:21.376495   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376737   95151 buildroot.go:166] provisioning hostname "ha-273199"
	I1028 11:53:21.376768   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376959   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.379689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380146   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.380176   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380378   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.380584   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380744   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380879   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.381094   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.381292   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.381311   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199 && echo "ha-273199" | sudo tee /etc/hostname
	I1028 11:53:21.505313   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 11:53:21.505340   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.507973   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508308   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.508335   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508498   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.508627   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508764   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.509011   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.509180   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.509205   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:53:21.627427   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:53:21.627469   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:53:21.627526   95151 buildroot.go:174] setting up certificates
	I1028 11:53:21.627546   95151 provision.go:84] configureAuth start
	I1028 11:53:21.627563   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.627864   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:21.630491   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.630851   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.630879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.631007   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.633459   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.633874   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.633904   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.634035   95151 provision.go:143] copyHostCerts
	I1028 11:53:21.634064   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634109   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:53:21.634121   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634183   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:53:21.634289   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634308   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:53:21.634312   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634344   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:53:21.634423   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634439   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:53:21.634443   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634469   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:53:21.634525   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199 san=[127.0.0.1 192.168.39.208 ha-273199 localhost minikube]
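provision.go above generates a server certificate whose SANs cover the loopback address, the guest IP, the hostname, localhost and minikube. A small standard-library sketch for checking that a certificate on disk really covers such a SAN list (the file name is illustrative, not a path minikube uses):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Point this at the server.pem you want to inspect.
	data, err := os.ReadFile("server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The SANs listed in the provision.go log line above.
	for _, san := range []string{"127.0.0.1", "192.168.39.208", "ha-273199", "localhost", "minikube"} {
		if err := cert.VerifyHostname(san); err != nil {
			fmt.Printf("NOT covered: %s (%v)\n", san, err)
		} else {
			fmt.Printf("covered:     %s\n", san)
		}
	}
}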
	I1028 11:53:21.941769   95151 provision.go:177] copyRemoteCerts
	I1028 11:53:21.941844   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:53:21.941871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.944301   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944588   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.944615   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944775   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.945004   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.945172   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.945312   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.028802   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:53:22.028910   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:53:22.051394   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:53:22.051457   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 11:53:22.072047   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:53:22.072099   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:53:22.092704   95151 provision.go:87] duration metric: took 465.141947ms to configureAuth
	I1028 11:53:22.092729   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:53:22.092901   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:22.092986   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.095606   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.095961   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.095988   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.096168   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.096372   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096528   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096655   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.096802   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.096954   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.096969   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:53:22.312757   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:53:22.312785   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:53:22.312806   95151 main.go:141] libmachine: (ha-273199) Calling .GetURL
	I1028 11:53:22.313992   95151 main.go:141] libmachine: (ha-273199) DBG | Using libvirt version 6000000
	I1028 11:53:22.316240   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316567   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.316595   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316828   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:53:22.316850   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:53:22.316861   95151 client.go:171] duration metric: took 24.31961411s to LocalClient.Create
	I1028 11:53:22.316914   95151 start.go:167] duration metric: took 24.319696986s to libmachine.API.Create "ha-273199"
	I1028 11:53:22.316928   95151 start.go:293] postStartSetup for "ha-273199" (driver="kvm2")
	I1028 11:53:22.316942   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:53:22.316962   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.317200   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:53:22.317223   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.319445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320158   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.320178   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320347   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.320534   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.320674   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.320778   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.406034   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:53:22.409957   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:53:22.409983   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:53:22.410056   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:53:22.410194   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:53:22.410209   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:53:22.410362   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:53:22.418934   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:22.439625   95151 start.go:296] duration metric: took 122.683745ms for postStartSetup
	I1028 11:53:22.439684   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:22.440268   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.442702   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443017   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.443035   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443281   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:22.443438   95151 start.go:128] duration metric: took 24.465239541s to createHost
	I1028 11:53:22.443459   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.446282   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446621   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.446650   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446768   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.446935   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447095   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447222   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.447404   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.447574   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.447589   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:53:22.559751   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116402.538168741
	
	I1028 11:53:22.559780   95151 fix.go:216] guest clock: 1730116402.538168741
	I1028 11:53:22.559788   95151 fix.go:229] Guest: 2024-10-28 11:53:22.538168741 +0000 UTC Remote: 2024-10-28 11:53:22.443448629 +0000 UTC m=+24.575720280 (delta=94.720112ms)
	I1028 11:53:22.559821   95151 fix.go:200] guest clock delta is within tolerance: 94.720112ms
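fix.go compares the guest clock (read with "date +%s.%N" over SSH) against the host clock and only resynchronizes when the delta exceeds a tolerance. A minimal sketch of that comparison; the sample value is taken from the log, while the 2s threshold is purely illustrative:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output such as "1730116402.538168741"
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730116402.538168741")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // illustrative threshold, not minikube's exact value
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}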
	I1028 11:53:22.559826   95151 start.go:83] releasing machines lock for "ha-273199", held for 24.581718789s
	I1028 11:53:22.559851   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.560134   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.562796   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563147   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.563185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563312   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563844   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563988   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.564076   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:53:22.564130   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.564190   95151 ssh_runner.go:195] Run: cat /version.json
	I1028 11:53:22.564216   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.566705   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.566929   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567041   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567064   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567296   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567390   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567416   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567469   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567580   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567668   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567738   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567794   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.567840   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567980   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.670647   95151 ssh_runner.go:195] Run: systemctl --version
	I1028 11:53:22.676078   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:53:22.830303   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:53:22.836224   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:53:22.836288   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:53:22.850695   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:53:22.850718   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:53:22.850775   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:53:22.865306   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:53:22.877632   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:53:22.877682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:53:22.889956   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:53:22.901677   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:53:23.007362   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:53:23.168538   95151 docker.go:233] disabling docker service ...
	I1028 11:53:23.168615   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:53:23.181374   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:53:23.192932   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:53:23.310662   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:53:23.424314   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:53:23.437058   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:53:23.453309   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:53:23.453391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.462468   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:53:23.462530   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.471391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.480284   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.489458   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:53:23.498558   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.507348   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.522430   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
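The sed invocations above point cri-o at the pause image, switch it to the cgroupfs cgroup manager and open unprivileged ports. The same kind of in-place edit can be sketched in Go with a regexp over the drop-in file; the path and pattern are copied from the log, and the program has to run as root on a host that actually has cri-o installed:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println("read failed (need root / cri-o installed?):", err)
		return
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(conf, updated, 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("pause_image updated; restart crio to pick it up")
}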
	I1028 11:53:23.531223   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:53:23.539417   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:53:23.539455   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:53:23.551001   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
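Above, the sysctl probe fails because the bridge netfilter module is not loaded yet, so the code falls back to modprobe br_netfilter and then enables IPv4 forwarding. A sketch of that check-then-load sequence (requires root; assumes modprobe is on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the sysctl file is missing, br_netfilter has not been loaded yet;
	// load it, mirroring the "modprobe br_netfilter" fallback in the log.
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
			return
		}
	}

	// Enable IPv4 forwarding the same way the log does (echo 1 > ...).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Printf("enabling ip_forward failed (need root?): %v\n", err)
		return
	}
	fmt.Println("bridge netfilter available and ip_forward enabled")
}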
	I1028 11:53:23.559053   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:23.661360   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:53:23.745420   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:53:23.745500   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:53:23.749645   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:53:23.749737   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:53:23.753175   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:53:23.787639   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:53:23.787732   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.812312   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.837983   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:53:23.839279   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:23.841862   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842156   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:23.842185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842344   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:53:23.845848   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:53:23.857277   95151 kubeadm.go:883] updating cluster {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:53:23.857375   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:23.857429   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:23.885745   95151 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:53:23.885803   95151 ssh_runner.go:195] Run: which lz4
	I1028 11:53:23.889147   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:53:23.889231   95151 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:53:23.892744   95151 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:53:23.892778   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:53:24.999101   95151 crio.go:462] duration metric: took 1.10988801s to copy over tarball
	I1028 11:53:24.999192   95151 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:53:26.940236   95151 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.941006419s)
	I1028 11:53:26.940272   95151 crio.go:469] duration metric: took 1.941134954s to extract the tarball
	I1028 11:53:26.940283   95151 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:53:26.975750   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:27.015231   95151 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:53:27.015255   95151 cache_images.go:84] Images are preloaded, skipping loading
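crio.go decides whether the preload tarball is still needed by listing the images the runtime already has: at 11:53:23 the kube-apiserver image was missing, so the tarball was copied and extracted, and by 11:53:27 everything is present and loading is skipped. A sketch of that check; the JSON field names are an assumption about the shape recent crictl releases emit (an "images" array whose entries carry "repoTags"), not something taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of `crictl images --output json`; field names are assumed.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected crictl output:", err)
		return
	}
	// Look for one image the preload would contain, as crio.go does.
	const want = "registry.k8s.io/kube-apiserver:v1.31.2"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded, skipping load")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}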
	I1028 11:53:27.015267   95151 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1028 11:53:27.015382   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:53:27.015466   95151 ssh_runner.go:195] Run: crio config
	I1028 11:53:27.056277   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:27.056302   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:27.056316   95151 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:53:27.056348   95151 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-273199 NodeName:ha-273199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:53:27.056497   95151 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-273199"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:53:27.056525   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:53:27.056581   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:53:27.072483   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:53:27.072593   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:53:27.072658   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:53:27.081034   95151 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:53:27.081092   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:53:27.089111   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:53:27.103592   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:53:27.118272   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:53:27.132197   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:53:27.146233   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:53:27.149485   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
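The one-liner above updates /etc/hosts idempotently: it drops any existing control-plane.minikube.internal line and re-appends the current mapping, so repeated provisioning never accumulates duplicates. The same idea as a small Go sketch (the IP and hostname come from the log; writing /etc/hosts needs root, so point it at a copy for a dry run):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line that already maps name (tab-separated,
// like the grep -v in the log) and appends a fresh "<ip>\t<name>" mapping.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue // drop the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}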
	I1028 11:53:27.160138   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:27.266620   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:53:27.282436   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.208
	I1028 11:53:27.282457   95151 certs.go:194] generating shared ca certs ...
	I1028 11:53:27.282478   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.282670   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:53:27.282728   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:53:27.282741   95151 certs.go:256] generating profile certs ...
	I1028 11:53:27.282809   95151 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:53:27.282826   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt with IP's: []
	I1028 11:53:27.352056   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt ...
	I1028 11:53:27.352083   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt: {Name:mk85ba9e2d7e36c2dc386074345191c8f41db2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352257   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key ...
	I1028 11:53:27.352268   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key: {Name:mk9e399a746995b3286d90f34445304b7c10dcc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352359   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602
	I1028 11:53:27.352376   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.254]
	I1028 11:53:27.701864   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 ...
	I1028 11:53:27.701927   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602: {Name:mkd8347f84237c1adf80fa2979e2851e438e06db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702124   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 ...
	I1028 11:53:27.702141   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602: {Name:mk8022b5d8b42b8f2926882e2d9f76f284ea38fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702238   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:53:27.702318   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:53:27.702367   95151 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:53:27.702384   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt with IP's: []
	I1028 11:53:27.887171   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt ...
	I1028 11:53:27.887202   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt: {Name:mk8df5a7b5c3f3d68e29bbf5b564443cc1d6c268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.887348   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key ...
	I1028 11:53:27.887359   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key: {Name:mk563997b82cf259c7f4075de274f929660222b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
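The client, apiserver and proxy-client certificates written above live under the profile directory on the host. A quick way to confirm the SANs that were requested is the standard openssl dump below; this is only a local cross-check, not a step the test run performs:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect the IPs from the generation line above: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.254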
	I1028 11:53:27.887428   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:53:27.887444   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:53:27.887455   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:53:27.887469   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:53:27.887479   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:53:27.887493   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:53:27.887505   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:53:27.887517   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:53:27.887565   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:53:27.887608   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:53:27.887618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:53:27.887660   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:53:27.887680   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:53:27.887702   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:53:27.887740   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:27.887767   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:53:27.887780   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:27.887797   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:53:27.888376   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:53:27.912711   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:53:27.933465   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:53:27.954641   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:53:27.975959   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:53:27.996205   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:53:28.020327   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:53:28.061582   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:53:28.089945   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:53:28.110791   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:53:28.131009   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:53:28.150891   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
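Everything scp'd above should now be present inside the VM. If one wanted to eyeball it from the host, the usual profile-scoped ssh would do (illustrative only; the run simply proceeds to the openssl steps below):

    out/minikube-linux-amd64 -p ha-273199 ssh "sudo ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig"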
	I1028 11:53:28.165153   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:53:28.170365   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:53:28.179779   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183529   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183568   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.188718   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:53:28.197725   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:53:28.206747   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210524   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210567   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.215456   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:53:28.224449   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:53:28.233481   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237734   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237779   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.242623   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
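The test -s / ls -la / openssl x509 -hash / ln -fs sequence repeated three times above is how each PEM is installed into the guest's trust store: compute the OpenSSL subject hash and symlink the file as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can resolve it. A minimal sketch of one iteration (hash value taken from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"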
	I1028 11:53:28.251661   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:53:28.255167   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:53:28.255214   95151 kubeadm.go:392] StartCluster: {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:53:28.255281   95151 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:53:28.255311   95151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:53:28.288882   95151 cri.go:89] found id: ""
	I1028 11:53:28.288966   95151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:53:28.297523   95151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:53:28.306258   95151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:53:28.314625   95151 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:53:28.314641   95151 kubeadm.go:157] found existing configuration files:
	
	I1028 11:53:28.314676   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:53:28.322612   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:53:28.322668   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:53:28.330792   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:53:28.338690   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:53:28.338727   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:53:28.346773   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.354775   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:53:28.354815   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.362916   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:53:28.370667   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:53:28.370718   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:53:28.378723   95151 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:53:28.563600   95151 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:53:38.972007   95151 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:53:38.972072   95151 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:53:38.972185   95151 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:53:38.972293   95151 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:53:38.972416   95151 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:53:38.972521   95151 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:53:38.974416   95151 out.go:235]   - Generating certificates and keys ...
	I1028 11:53:38.974509   95151 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:53:38.974601   95151 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:53:38.974706   95151 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:53:38.974787   95151 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:53:38.974879   95151 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:53:38.974959   95151 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:53:38.975036   95151 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:53:38.975286   95151 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975365   95151 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:53:38.975516   95151 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975611   95151 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:53:38.975722   95151 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:53:38.975797   95151 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:53:38.975877   95151 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:53:38.975944   95151 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:53:38.976014   95151 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:53:38.976064   95151 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:53:38.976141   95151 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:53:38.976202   95151 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:53:38.976272   95151 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:53:38.976334   95151 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:53:38.977999   95151 out.go:235]   - Booting up control plane ...
	I1028 11:53:38.978106   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:53:38.978178   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:53:38.978240   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:53:38.978347   95151 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:53:38.978445   95151 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:53:38.978486   95151 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:53:38.978635   95151 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:53:38.978759   95151 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:53:38.978849   95151 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001498504s
	I1028 11:53:38.978951   95151 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:53:38.979035   95151 kubeadm.go:310] [api-check] The API server is healthy after 5.77087672s
	I1028 11:53:38.979160   95151 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:53:38.979301   95151 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:53:38.979391   95151 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:53:38.979587   95151 kubeadm.go:310] [mark-control-plane] Marking the node ha-273199 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:53:38.979669   95151 kubeadm.go:310] [bootstrap-token] Using token: 2y659k.kh228wx7fnaw6qlw
	I1028 11:53:38.980850   95151 out.go:235]   - Configuring RBAC rules ...
	I1028 11:53:38.980953   95151 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:53:38.981063   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:53:38.981194   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:53:38.981315   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:53:38.981461   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:53:38.981577   95151 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:53:38.981701   95151 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:53:38.981766   95151 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:53:38.981845   95151 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:53:38.981853   95151 kubeadm.go:310] 
	I1028 11:53:38.981937   95151 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:53:38.981950   95151 kubeadm.go:310] 
	I1028 11:53:38.982070   95151 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:53:38.982082   95151 kubeadm.go:310] 
	I1028 11:53:38.982120   95151 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:53:38.982205   95151 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:53:38.982281   95151 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:53:38.982294   95151 kubeadm.go:310] 
	I1028 11:53:38.982369   95151 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:53:38.982381   95151 kubeadm.go:310] 
	I1028 11:53:38.982451   95151 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:53:38.982463   95151 kubeadm.go:310] 
	I1028 11:53:38.982538   95151 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:53:38.982640   95151 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:53:38.982741   95151 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:53:38.982752   95151 kubeadm.go:310] 
	I1028 11:53:38.982827   95151 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:53:38.982895   95151 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:53:38.982901   95151 kubeadm.go:310] 
	I1028 11:53:38.982972   95151 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983065   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 11:53:38.983084   95151 kubeadm.go:310] 	--control-plane 
	I1028 11:53:38.983090   95151 kubeadm.go:310] 
	I1028 11:53:38.983184   95151 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:53:38.983205   95151 kubeadm.go:310] 
	I1028 11:53:38.983290   95151 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983394   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
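The --discovery-token-ca-cert-hash in the join commands is the SHA-256 of the cluster CA public key. It can be recomputed on the node from the certificateDir used above; this is the standard kubeadm recipe, shown only as a cross-check:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23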
	I1028 11:53:38.983404   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:38.983412   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:38.985768   95151 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:53:38.987136   95151 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:53:38.992611   95151 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:53:38.992633   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:53:39.010322   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
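cni.go recommended kindnet for this multinode profile and the manifest was applied from /var/tmp/minikube/cni.yaml. A follow-up check that the DaemonSet pods came up could look like the following (the app=kindnet label is assumed from the usual kindnet manifest, not taken from this log):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide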
	I1028 11:53:39.369131   95151 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:53:39.369214   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199 minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=true
	I1028 11:53:39.369218   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:39.407348   95151 ops.go:34] apiserver oom_adj: -16
	I1028 11:53:39.512261   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.013130   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.512492   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.012760   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.512614   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.013105   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.513113   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.013197   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.130930   95151 kubeadm.go:1113] duration metric: took 3.761785969s to wait for elevateKubeSystemPrivileges
	I1028 11:53:43.130968   95151 kubeadm.go:394] duration metric: took 14.875757661s to StartCluster
	I1028 11:53:43.130992   95151 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.131082   95151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.131868   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.132066   95151 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.132080   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:53:43.132092   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:53:43.132110   95151 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:53:43.132191   95151 addons.go:69] Setting storage-provisioner=true in profile "ha-273199"
	I1028 11:53:43.132211   95151 addons.go:234] Setting addon storage-provisioner=true in "ha-273199"
	I1028 11:53:43.132226   95151 addons.go:69] Setting default-storageclass=true in profile "ha-273199"
	I1028 11:53:43.132243   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.132254   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.132263   95151 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-273199"
	I1028 11:53:43.132656   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132704   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.132733   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132778   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.148009   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I1028 11:53:43.148148   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I1028 11:53:43.148527   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.148654   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.149031   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149050   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149159   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149183   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149384   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149521   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149709   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.149923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.149968   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.152241   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.152594   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:53:43.153153   95151 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:53:43.153487   95151 addons.go:234] Setting addon default-storageclass=true in "ha-273199"
	I1028 11:53:43.153537   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.153923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.153966   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.165112   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I1028 11:53:43.165628   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.166122   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.166140   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.166447   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.166644   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.168390   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.168673   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1028 11:53:43.169162   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.169675   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.169697   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.170033   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.170484   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.170504   95151 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:53:43.170529   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.172043   95151 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.172062   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:53:43.172076   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.174879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175341   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.175404   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175532   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.175676   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.175782   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.175869   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.188178   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I1028 11:53:43.188778   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.189356   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.189374   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.189736   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.189945   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.191684   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.191903   95151 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.191914   95151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:53:43.191927   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.195100   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195553   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.195576   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195757   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.195929   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.196073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.196212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.240072   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:53:43.320825   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.357607   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.543521   95151 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
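The sed pipeline a few lines above is what produced this record: it splices a hosts plugin block into the coredns ConfigMap before replacing it, so the Corefile ends up containing roughly:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }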
	I1028 11:53:43.793100   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793126   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793180   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793204   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793468   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793490   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793520   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793527   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793535   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793541   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793554   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793572   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793581   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793594   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793790   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793822   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793830   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793837   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793798   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793900   95151 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:53:43.793919   95151 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:53:43.794073   95151 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:53:43.794085   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.794095   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.794103   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.805561   95151 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:53:43.806144   95151 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:53:43.806158   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.806166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.806169   95151 round_trippers.go:473]     Content-Type: application/json
	I1028 11:53:43.806171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.809243   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
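The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard is the default-storageclass addon reconciling the standard class (presumably keeping it annotated as the default). From the host, the end state would show up in plain kubectl output, e.g.:

    kubectl --context ha-273199 get storageclass
    # expect the 'standard' class flagged '(default)'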
	I1028 11:53:43.809609   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.809624   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.809925   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.809942   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.809968   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.812285   95151 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:53:43.813517   95151 addons.go:510] duration metric: took 681.412756ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:53:43.813552   95151 start.go:246] waiting for cluster config update ...
	I1028 11:53:43.813579   95151 start.go:255] writing updated cluster config ...
	I1028 11:53:43.815032   95151 out.go:201] 
	I1028 11:53:43.816430   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.816508   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.817974   95151 out.go:177] * Starting "ha-273199-m02" control-plane node in "ha-273199" cluster
	I1028 11:53:43.819185   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:43.819208   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:53:43.819300   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:53:43.819313   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:53:43.819381   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.819558   95151 start.go:360] acquireMachinesLock for ha-273199-m02: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:53:43.819623   95151 start.go:364] duration metric: took 33.288µs to acquireMachinesLock for "ha-273199-m02"
	I1028 11:53:43.819661   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.819740   95151 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:53:43.821273   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:53:43.821359   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.821393   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.836503   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1028 11:53:43.837015   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.837597   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.837620   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.837996   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.838155   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:53:43.838314   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:53:43.838482   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:53:43.838517   95151 client.go:168] LocalClient.Create starting
	I1028 11:53:43.838554   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:53:43.838592   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838613   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838664   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:53:43.838684   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838696   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838711   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:53:43.838718   95151 main.go:141] libmachine: (ha-273199-m02) Calling .PreCreateCheck
	I1028 11:53:43.838865   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:53:43.839217   95151 main.go:141] libmachine: Creating machine...
	I1028 11:53:43.839229   95151 main.go:141] libmachine: (ha-273199-m02) Calling .Create
	I1028 11:53:43.839340   95151 main.go:141] libmachine: (ha-273199-m02) Creating KVM machine...
	I1028 11:53:43.840585   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing default KVM network
	I1028 11:53:43.840677   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing private KVM network mk-ha-273199
	I1028 11:53:43.840819   95151 main.go:141] libmachine: (ha-273199-m02) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:43.840837   95151 main.go:141] libmachine: (ha-273199-m02) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:53:43.840944   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:43.840827   95521 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:43.841035   95151 main.go:141] libmachine: (ha-273199-m02) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:53:44.101967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.101844   95521 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa...
	I1028 11:53:44.215652   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215521   95521 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk...
	I1028 11:53:44.215686   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing magic tar header
	I1028 11:53:44.215700   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing SSH key tar header
	I1028 11:53:44.215717   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215655   95521 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:44.215805   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02
	I1028 11:53:44.215837   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:53:44.215846   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 (perms=drwx------)
	I1028 11:53:44.215856   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:53:44.215863   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:53:44.215873   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:53:44.215879   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:53:44.215889   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:53:44.215894   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:44.215903   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:44.215911   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:53:44.215919   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:53:44.215925   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:53:44.215930   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home
	I1028 11:53:44.215935   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Skipping /home - not owner
	I1028 11:53:44.216891   95151 main.go:141] libmachine: (ha-273199-m02) define libvirt domain using xml: 
	I1028 11:53:44.216918   95151 main.go:141] libmachine: (ha-273199-m02) <domain type='kvm'>
	I1028 11:53:44.216933   95151 main.go:141] libmachine: (ha-273199-m02)   <name>ha-273199-m02</name>
	I1028 11:53:44.216941   95151 main.go:141] libmachine: (ha-273199-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:53:44.216950   95151 main.go:141] libmachine: (ha-273199-m02)   <vcpu>2</vcpu>
	I1028 11:53:44.216957   95151 main.go:141] libmachine: (ha-273199-m02)   <features>
	I1028 11:53:44.216966   95151 main.go:141] libmachine: (ha-273199-m02)     <acpi/>
	I1028 11:53:44.216976   95151 main.go:141] libmachine: (ha-273199-m02)     <apic/>
	I1028 11:53:44.216983   95151 main.go:141] libmachine: (ha-273199-m02)     <pae/>
	I1028 11:53:44.216989   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.216999   95151 main.go:141] libmachine: (ha-273199-m02)   </features>
	I1028 11:53:44.217007   95151 main.go:141] libmachine: (ha-273199-m02)   <cpu mode='host-passthrough'>
	I1028 11:53:44.217034   95151 main.go:141] libmachine: (ha-273199-m02)   
	I1028 11:53:44.217056   95151 main.go:141] libmachine: (ha-273199-m02)   </cpu>
	I1028 11:53:44.217068   95151 main.go:141] libmachine: (ha-273199-m02)   <os>
	I1028 11:53:44.217079   95151 main.go:141] libmachine: (ha-273199-m02)     <type>hvm</type>
	I1028 11:53:44.217093   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='cdrom'/>
	I1028 11:53:44.217102   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='hd'/>
	I1028 11:53:44.217112   95151 main.go:141] libmachine: (ha-273199-m02)     <bootmenu enable='no'/>
	I1028 11:53:44.217123   95151 main.go:141] libmachine: (ha-273199-m02)   </os>
	I1028 11:53:44.217133   95151 main.go:141] libmachine: (ha-273199-m02)   <devices>
	I1028 11:53:44.217140   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='cdrom'>
	I1028 11:53:44.217157   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/boot2docker.iso'/>
	I1028 11:53:44.217172   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:53:44.217183   95151 main.go:141] libmachine: (ha-273199-m02)       <readonly/>
	I1028 11:53:44.217196   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217208   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='disk'>
	I1028 11:53:44.217219   95151 main.go:141] libmachine: (ha-273199-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:53:44.217231   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk'/>
	I1028 11:53:44.217243   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:53:44.217254   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217268   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217279   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='mk-ha-273199'/>
	I1028 11:53:44.217289   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217297   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217306   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217311   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='default'/>
	I1028 11:53:44.217318   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217327   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217340   95151 main.go:141] libmachine: (ha-273199-m02)     <serial type='pty'>
	I1028 11:53:44.217349   95151 main.go:141] libmachine: (ha-273199-m02)       <target port='0'/>
	I1028 11:53:44.217361   95151 main.go:141] libmachine: (ha-273199-m02)     </serial>
	I1028 11:53:44.217372   95151 main.go:141] libmachine: (ha-273199-m02)     <console type='pty'>
	I1028 11:53:44.217382   95151 main.go:141] libmachine: (ha-273199-m02)       <target type='serial' port='0'/>
	I1028 11:53:44.217390   95151 main.go:141] libmachine: (ha-273199-m02)     </console>
	I1028 11:53:44.217400   95151 main.go:141] libmachine: (ha-273199-m02)     <rng model='virtio'>
	I1028 11:53:44.217420   95151 main.go:141] libmachine: (ha-273199-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:53:44.217438   95151 main.go:141] libmachine: (ha-273199-m02)     </rng>
	I1028 11:53:44.217448   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217460   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217472   95151 main.go:141] libmachine: (ha-273199-m02)   </devices>
	I1028 11:53:44.217481   95151 main.go:141] libmachine: (ha-273199-m02) </domain>
	I1028 11:53:44.217489   95151 main.go:141] libmachine: (ha-273199-m02) 
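
The <domain type='kvm'> document printed above is the complete libvirt definition the kvm2 driver submits for the m02 guest before it boots it. As a rough, illustrative sketch (not the driver's actual code), defining and starting such a domain from an XML file with the github.com/libvirt/libvirt-go bindings could look like the following; the file name ha-273199-m02.xml and the qemu:///system connection URI are assumptions made for the example.

// sketch only: define and boot a libvirt domain from an XML file.
// Assumes the github.com/libvirt/libvirt-go bindings and a local libvirt daemon.
package main

import (
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	// Connect to the system libvirt instance (the log above uses qemu/KVM).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Hypothetical file holding a <domain> document like the one logged above.
	xml, err := os.ReadFile("ha-273199-m02.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}
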
	I1028 11:53:44.223932   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:5f:41:a3 in network default
	I1028 11:53:44.224544   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:44.224583   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring networks are active...
	I1028 11:53:44.225374   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network default is active
	I1028 11:53:44.225816   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network mk-ha-273199 is active
	I1028 11:53:44.226251   95151 main.go:141] libmachine: (ha-273199-m02) Getting domain xml...
	I1028 11:53:44.227023   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:45.439147   95151 main.go:141] libmachine: (ha-273199-m02) Waiting to get IP...
	I1028 11:53:45.440088   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.440554   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.440583   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.440482   95521 retry.go:31] will retry after 269.373557ms: waiting for machine to come up
	I1028 11:53:45.712000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.712443   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.712474   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.712389   95521 retry.go:31] will retry after 298.904949ms: waiting for machine to come up
	I1028 11:53:46.012797   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.013174   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.013203   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.013118   95521 retry.go:31] will retry after 446.110397ms: waiting for machine to come up
	I1028 11:53:46.460766   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.461220   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.461245   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.461168   95521 retry.go:31] will retry after 398.131323ms: waiting for machine to come up
	I1028 11:53:46.860852   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.861266   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.861297   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.861218   95521 retry.go:31] will retry after 575.124652ms: waiting for machine to come up
	I1028 11:53:47.437756   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:47.438185   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:47.438208   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:47.438138   95521 retry.go:31] will retry after 828.228762ms: waiting for machine to come up
	I1028 11:53:48.267451   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:48.267942   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:48.267968   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:48.267911   95521 retry.go:31] will retry after 1.143938031s: waiting for machine to come up
	I1028 11:53:49.414967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:49.415400   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:49.415424   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:49.415361   95521 retry.go:31] will retry after 1.300605887s: waiting for machine to come up
	I1028 11:53:50.717749   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:50.718139   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:50.718173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:50.718072   95521 retry.go:31] will retry after 1.594414229s: waiting for machine to come up
	I1028 11:53:52.314529   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:52.314977   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:52.315000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:52.314931   95521 retry.go:31] will retry after 1.837671448s: waiting for machine to come up
	I1028 11:53:54.154075   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:54.154455   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:54.154488   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:54.154386   95521 retry.go:31] will retry after 2.115441874s: waiting for machine to come up
	I1028 11:53:56.272674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:56.273183   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:56.273216   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:56.273084   95521 retry.go:31] will retry after 3.620483706s: waiting for machine to come up
	I1028 11:53:59.894801   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:59.895232   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:59.895260   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:59.895175   95521 retry.go:31] will retry after 3.99432381s: waiting for machine to come up
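
The run of "will retry after …" lines above is a backoff loop that keeps polling the DHCP leases of network mk-ha-273199 until the new MAC address obtains an IP. A minimal stand-alone sketch of that waiting pattern follows; it is not minikube's retry helper, and lookupIP is a hypothetical placeholder for the lease query.

// Illustrative backoff loop for "waiting for machine to come up".
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the network's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, mirroring the increasing
		// "will retry after ..." intervals in the log above.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
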
	I1028 11:54:03.891608   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892071   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has current primary IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892098   95151 main.go:141] libmachine: (ha-273199-m02) Found IP for machine: 192.168.39.225
	I1028 11:54:03.892108   95151 main.go:141] libmachine: (ha-273199-m02) Reserving static IP address...
	I1028 11:54:03.892498   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find host DHCP lease matching {name: "ha-273199-m02", mac: "52:54:00:ac:c5:96", ip: "192.168.39.225"} in network mk-ha-273199
	I1028 11:54:03.966695   95151 main.go:141] libmachine: (ha-273199-m02) Reserved static IP address: 192.168.39.225
	I1028 11:54:03.966737   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Getting to WaitForSSH function...
	I1028 11:54:03.966746   95151 main.go:141] libmachine: (ha-273199-m02) Waiting for SSH to be available...
	I1028 11:54:03.969754   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970154   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:03.970188   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970315   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH client type: external
	I1028 11:54:03.970338   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa (-rw-------)
	I1028 11:54:03.970367   95151 main.go:141] libmachine: (ha-273199-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:54:03.970390   95151 main.go:141] libmachine: (ha-273199-m02) DBG | About to run SSH command:
	I1028 11:54:03.970403   95151 main.go:141] libmachine: (ha-273199-m02) DBG | exit 0
	I1028 11:54:04.099273   95151 main.go:141] libmachine: (ha-273199-m02) DBG | SSH cmd err, output: <nil>: 
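
The WaitForSSH step above shells out to the system ssh binary with the listed options and treats a successful remote "exit 0" as proof the guest is reachable. A hedged sketch of the same probe via os/exec follows; it is not the libmachine implementation, and the host, user, and key path are simply copied from the log for illustration.

// Illustrative SSH reachability probe using the system ssh client.
package main

import (
	"log"
	"os/exec"
)

func sshReachable(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0", // the same no-op command the log runs to prove SSH is up
	}
	cmd := exec.Command("ssh", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Printf("ssh probe failed: %v (%s)", err, out)
		return false
	}
	return true
}

func main() {
	key := "/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa"
	log.Println("ssh reachable:", sshReachable("192.168.39.225", key))
}
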
	I1028 11:54:04.099507   95151 main.go:141] libmachine: (ha-273199-m02) KVM machine creation complete!
	I1028 11:54:04.099831   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:04.100498   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100706   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100853   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:54:04.100870   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetState
	I1028 11:54:04.101944   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:54:04.101958   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:54:04.101966   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:54:04.101973   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.104164   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104483   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.104506   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104767   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.104942   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105094   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105250   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.105441   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.105654   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.105665   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:54:04.218542   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.218568   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:54:04.218578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.221233   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221723   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.221745   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221945   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.222117   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222361   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222486   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.222628   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.222833   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.222844   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:54:04.335872   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:54:04.335945   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:54:04.335957   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:54:04.335971   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336202   95151 buildroot.go:166] provisioning hostname "ha-273199-m02"
	I1028 11:54:04.336228   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336396   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.338798   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.339199   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339341   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.339521   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339681   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339813   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.339995   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.340196   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.340212   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m02 && echo "ha-273199-m02" | sudo tee /etc/hostname
	I1028 11:54:04.470703   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m02
	
	I1028 11:54:04.470739   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.473349   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473761   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.473785   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473981   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.474167   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474373   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474538   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.474717   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.474941   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.474960   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:54:04.595447   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.595480   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:54:04.595502   95151 buildroot.go:174] setting up certificates
	I1028 11:54:04.595513   95151 provision.go:84] configureAuth start
	I1028 11:54:04.595525   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.595847   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:04.598618   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599070   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.599093   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599227   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.601800   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602155   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.602179   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602325   95151 provision.go:143] copyHostCerts
	I1028 11:54:04.602362   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602399   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:54:04.602409   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602488   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:54:04.602621   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602649   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:54:04.602654   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602686   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:54:04.602741   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602762   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:54:04.602770   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602806   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:54:04.602864   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m02 san=[127.0.0.1 192.168.39.225 ha-273199-m02 localhost minikube]
	I1028 11:54:04.712606   95151 provision.go:177] copyRemoteCerts
	I1028 11:54:04.712663   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:54:04.712689   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.715518   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.715885   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.715912   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.716119   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.716297   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.716427   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.716589   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:04.800760   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:54:04.800829   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:54:04.821891   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:54:04.821965   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:54:04.847580   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:54:04.847678   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:54:04.870711   95151 provision.go:87] duration metric: took 275.184548ms to configureAuth
	I1028 11:54:04.870736   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:54:04.870943   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:04.871041   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.873592   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.873927   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.873960   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.874110   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.874287   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874448   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874594   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.874763   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.874973   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.874993   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:54:05.089509   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:54:05.089537   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:54:05.089548   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetURL
	I1028 11:54:05.090747   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using libvirt version 6000000
	I1028 11:54:05.092647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.092983   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.093012   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.093142   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:54:05.093158   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:54:05.093166   95151 client.go:171] duration metric: took 21.254637002s to LocalClient.Create
	I1028 11:54:05.093189   95151 start.go:167] duration metric: took 21.254710604s to libmachine.API.Create "ha-273199"
	I1028 11:54:05.093198   95151 start.go:293] postStartSetup for "ha-273199-m02" (driver="kvm2")
	I1028 11:54:05.093210   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:54:05.093234   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.093471   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:54:05.093501   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.095736   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096090   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.096118   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096277   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.096451   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.096607   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.096752   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.185260   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:54:05.189209   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:54:05.189235   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:54:05.189307   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:54:05.189410   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:54:05.189427   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:54:05.189540   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:54:05.197852   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:05.218582   95151 start.go:296] duration metric: took 125.373729ms for postStartSetup
	I1028 11:54:05.218639   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:05.219202   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.221996   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222347   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.222371   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222675   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:05.222856   95151 start.go:128] duration metric: took 21.403106118s to createHost
	I1028 11:54:05.222880   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.225160   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225457   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.225486   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225646   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.225805   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.225943   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.226048   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.226180   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:05.226400   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:05.226415   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:54:05.335802   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116445.296198293
	
	I1028 11:54:05.335827   95151 fix.go:216] guest clock: 1730116445.296198293
	I1028 11:54:05.335841   95151 fix.go:229] Guest: 2024-10-28 11:54:05.296198293 +0000 UTC Remote: 2024-10-28 11:54:05.222866703 +0000 UTC m=+67.355138355 (delta=73.33159ms)
	I1028 11:54:05.335873   95151 fix.go:200] guest clock delta is within tolerance: 73.33159ms
	I1028 11:54:05.335881   95151 start.go:83] releasing machines lock for "ha-273199-m02", held for 21.516234573s
	I1028 11:54:05.335906   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.336186   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.338574   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.338916   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.338947   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.341021   95151 out.go:177] * Found network options:
	I1028 11:54:05.342553   95151 out.go:177]   - NO_PROXY=192.168.39.208
	W1028 11:54:05.343876   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.343912   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344410   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344601   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344686   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:54:05.344725   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	W1028 11:54:05.344795   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.344870   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:54:05.344892   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.347272   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347603   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.347674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347762   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.347920   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348040   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.348054   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348067   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.348192   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.348264   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.348426   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348717   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.584423   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:54:05.589736   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:54:05.589802   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:54:05.603598   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:54:05.603618   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:54:05.603689   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:54:05.618579   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:54:05.631876   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:54:05.631943   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:54:05.646115   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:54:05.659547   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:54:05.777548   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:54:05.920510   95151 docker.go:233] disabling docker service ...
	I1028 11:54:05.920601   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:54:05.935682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:54:05.948830   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:54:06.089969   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:54:06.214668   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:54:06.227025   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:54:06.243529   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:54:06.243600   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.252888   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:54:06.252945   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.262219   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.271415   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.282109   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:54:06.291692   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.300914   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.316681   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.325900   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:54:06.334164   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:54:06.334217   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:54:06.345295   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:54:06.353414   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:06.469387   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:54:06.564464   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:54:06.564532   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:54:06.570888   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:54:06.570947   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:54:06.574424   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:54:06.609470   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:54:06.609577   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.636484   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.662978   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:54:06.664616   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:54:06.665640   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:06.668607   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.668966   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:06.669004   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.669229   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:54:06.673421   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:54:06.684696   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:54:06.684909   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:06.685156   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.685193   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.700107   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38707
	I1028 11:54:06.700577   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.701057   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.701079   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.701393   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.701590   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:54:06.703274   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:06.703621   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.703693   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.718078   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I1028 11:54:06.718513   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.718987   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.719005   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.719322   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.719504   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:06.719671   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.225
	I1028 11:54:06.719683   95151 certs.go:194] generating shared ca certs ...
	I1028 11:54:06.719702   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.719827   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:54:06.719882   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:54:06.719896   95151 certs.go:256] generating profile certs ...
	I1028 11:54:06.720023   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:54:06.720055   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909
	I1028 11:54:06.720075   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.254]
	I1028 11:54:06.852806   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 ...
	I1028 11:54:06.852843   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909: {Name:mkb8ff493606403d4b0e4c7b0477c06720a08f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853016   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 ...
	I1028 11:54:06.853029   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909: {Name:mkb3a86efc0165669c50f21e172de132f2ce3594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853101   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:54:06.853233   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:54:06.853356   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:54:06.853375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:54:06.853388   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:54:06.853400   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:54:06.853413   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:54:06.853426   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:54:06.853437   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:54:06.853448   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:54:06.853457   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:54:06.853505   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:54:06.853533   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:54:06.853542   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:54:06.853570   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:54:06.853618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:54:06.853648   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:54:06.853686   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:06.853716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:06.853730   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:54:06.853740   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:54:06.853773   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:06.856848   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857257   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:06.857283   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857465   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:06.857654   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:06.857769   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:06.857872   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:06.935983   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:54:06.940830   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:54:06.951512   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:54:06.955415   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:54:06.964440   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:54:06.967840   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:54:06.977901   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:54:06.982116   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:54:06.992655   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:54:06.997042   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:54:07.006289   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:54:07.009936   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:54:07.019550   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:54:07.043269   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:54:07.066117   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:54:07.088035   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:54:07.109468   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:54:07.130767   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:54:07.153514   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:54:07.175748   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:54:07.198209   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:54:07.219569   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:54:07.241366   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:54:07.262724   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:54:07.277348   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:54:07.291720   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:54:07.305550   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:54:07.319528   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:54:07.333567   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:54:07.347382   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:54:07.361182   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:54:07.366165   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:54:07.375271   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379042   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379097   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.384098   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:54:07.393089   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:54:07.402170   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405931   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405973   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.410926   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:54:07.420134   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:54:07.429223   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433088   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433140   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.437953   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
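	[Editor's note] The three blocks above show the same pattern for each CA bundle on the node: hash the certificate with "openssl x509 -hash -noout", then symlink it into /etc/ssl/certs under "<hash>.0" so OpenSSL-based clients trust it. The following is only an illustrative Go sketch of that pattern, not minikube's actual code; the file name and the helper installCACert are invented for the example and it assumes the openssl binary is on PATH.

	// hashlink.go - illustrative sketch (not minikube's implementation): install a CA
	// certificate into /etc/ssl/certs using the OpenSSL subject-hash symlink scheme,
	// mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(certPath, certsDir string) error {
		// Ask openssl for the subject-name hash of the certificate (e.g. "b5213941").
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// Symlink <certsDir>/<hash>.0 -> certPath, replacing any stale link.
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}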
	I1028 11:54:07.447048   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:54:07.450389   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:54:07.450445   95151 kubeadm.go:934] updating node {m02 192.168.39.225 8443 v1.31.2 crio true true} ...
	I1028 11:54:07.450537   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:54:07.450564   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:54:07.450597   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:54:07.463741   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:54:07.463803   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
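	[Editor's note] The static pod manifest above is what minikube later copies to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down), so the kubelet on each control-plane node runs kube-vip to hold the HA VIP 192.168.39.254 on eth0 and load-balance port 8443. Below is only a minimal, hedged Go sketch of templating such a manifest; the file name, the params struct, and the trimmed-down pod spec are assumptions for illustration, not minikube's generator.

	// kubevip_template.go - illustrative sketch: fill the VIP address, interface and
	// port into a reduced static pod manifest like the one shown in the log above.
	package main

	import (
		"os"
		"text/template"
	)

	// manifest keeps only the per-cluster fields; the real manifest carries many more env vars.
	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    args: ["manager"]
	    env:
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: {{ .VIP }}
	  hostNetwork: true
	`

	type params struct {
		VIP       string
		Interface string
		Port      int
	}

	func main() {
		tmpl := template.Must(template.New("kube-vip").Parse(manifest))
		// Values taken from the log: VIP 192.168.39.254 on eth0, API server port 8443.
		_ = tmpl.Execute(os.Stdout, params{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
	}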
	I1028 11:54:07.463849   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.472253   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:54:07.472293   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.480970   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:54:07.480983   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:54:07.481001   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.481025   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:54:07.481066   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.484605   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:54:07.484635   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:54:08.215699   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.215797   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.220472   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:54:08.220510   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:54:08.302949   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:08.332777   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.332899   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.344780   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:54:08.344827   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
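	[Editor's note] The binary transfer above follows one pattern per tool (kubectl, kubeadm, kubelet): download from dl.k8s.io with a "checksum=file:...sha256" reference, cache locally, then scp into /var/lib/minikube/binaries when the existence check fails. The sketch below shows only the download-and-verify half in plain Go stdlib; the file name and fetch helper are invented for the example, and it writes to the current directory rather than the node.

	// fetch_binary.go - illustrative sketch of downloading a Kubernetes binary and
	// checking it against the published .sha256 file, as the log above describes.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm"

		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sumFile, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}

		// The .sha256 file holds the hex digest as its first field.
		want := strings.Fields(string(sumFile))[0]
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch for kubeadm")
		}
		if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubeadm verified and written")
	}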
	I1028 11:54:08.738465   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:54:08.748651   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 11:54:08.763967   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:54:08.778166   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:54:08.792673   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:54:08.796110   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
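	[Editor's note] The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing line ending in a tab plus "control-plane.minikube.internal" and appends the current VIP mapping. A minimal Go sketch of the same idea follows; ensureHostsEntry and the file name are assumptions for illustration, and it takes the hosts file path as an argument so it can be tried on a copy rather than the real /etc/hosts.

	// hosts_entry.go - illustrative sketch of the idempotent hosts-file update shown above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the stale entry for this hostname
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: hosts_entry <hosts-file>")
			os.Exit(1)
		}
		if err := ensureHostsEntry(os.Args[1], "192.168.39.254", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}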
	I1028 11:54:08.806415   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:08.913077   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:08.928428   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:08.928936   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:08.929001   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:08.945393   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1028 11:54:08.945922   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:08.946367   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:08.946393   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:08.946734   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:08.946931   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:08.947168   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:54:08.947340   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:54:08.947363   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:08.950295   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.950729   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:08.950759   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.951003   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:08.951292   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:08.951467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:08.951675   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:09.101707   95151 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:09.101780   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I1028 11:54:28.747369   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (19.645557844s)
	I1028 11:54:28.747419   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:54:29.256098   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m02 minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:54:29.382642   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:54:29.487190   95151 start.go:319] duration metric: took 20.540107471s to joinCluster
	I1028 11:54:29.487270   95151 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:29.487603   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:29.489950   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:54:29.491267   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:29.728819   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:29.746970   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:54:29.747328   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:54:29.747474   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:54:29.747814   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m02" to be "Ready" ...
	I1028 11:54:29.747948   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:29.747961   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:29.747972   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:29.747980   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:29.757406   95151 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:54:30.248317   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.248345   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.248355   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.248359   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.255105   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:54:30.748943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.748969   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.748978   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.748984   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.752101   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:31.248899   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.248919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.248928   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.248936   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.251583   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.748337   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.748357   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.748366   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.748371   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.751333   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.751989   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:32.248221   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.248243   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.248251   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.248255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.259191   95151 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:54:32.748148   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.748179   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.748189   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.748194   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.751101   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.249110   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.249135   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.249144   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.249150   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.251769   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.748905   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.748928   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.748937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.748942   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.751961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:33.752497   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:34.248826   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.248847   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.248857   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.248863   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.251279   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:34.748949   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.748976   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.748988   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.748993   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.752114   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:35.248874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.248898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.248906   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.248911   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.251839   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:35.748886   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.748919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.748932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.748940   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.751814   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.248781   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.248808   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.248821   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.248826   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.251662   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.252253   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:36.748294   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.748319   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.748329   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.748343   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.751795   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.248778   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.248807   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.248815   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.248820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.252064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.748876   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.748901   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.748910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.748922   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.752889   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.248910   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.248935   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.248946   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.248951   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.252324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.252974   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:38.748358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.748389   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.748401   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.748410   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.751564   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.248494   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.248515   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.248524   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.248530   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.251902   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.748889   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.748912   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.748920   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.748925   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.751666   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.248637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.248663   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.248675   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.248682   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.251500   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.748631   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.748655   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.748665   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.748671   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.751537   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.752161   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:41.248409   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.248429   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.248437   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.248441   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.251178   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:41.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.748632   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.748641   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.748645   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.751235   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.248135   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.248157   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.248166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.248171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.251061   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.748875   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.748898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.748904   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.748908   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.751883   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.752428   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:43.248728   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.248749   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.248757   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.248760   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.251847   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:43.748532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.748554   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.748562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.748565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.751916   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.248210   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.248233   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.248241   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.248245   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.251111   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:44.749062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.749085   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.749092   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.749096   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.752695   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.753451   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:45.248752   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.248776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.248784   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.248787   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.251702   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:45.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.748635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.748643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.748647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.751481   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:46.248237   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.248261   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.248269   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.248272   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.251677   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:46.748175   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.748196   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.748204   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.748209   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.750924   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.249094   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.249121   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.249133   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.249139   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.251939   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.252527   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:47.748867   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.748890   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.748899   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.748903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.751778   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.248555   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.248585   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.248593   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.248597   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.251510   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.252376   95151 node_ready.go:49] node "ha-273199-m02" has status "Ready":"True"
	I1028 11:54:48.252397   95151 node_ready.go:38] duration metric: took 18.504559305s for node "ha-273199-m02" to be "Ready" ...
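	[Editor's note] The long run of GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02 requests above is minikube polling the API server (every ~500ms, up to 6m) until the newly joined control-plane node reports the NodeReady condition as True. The sketch below expresses the same wait with client-go instead of minikube's own logged round-tripper; it assumes a standard kubeconfig at the default location and a go.mod pulling in k8s.io/client-go, and the 500ms interval is an assumption chosen to match the cadence seen in the log.

	// wait_node_ready.go - illustrative sketch of the "node Ready" poll loop above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "ha-273199-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node ha-273199-m02 is Ready")
	}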
	I1028 11:54:48.252406   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:54:48.252478   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:48.252487   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.252496   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.252506   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.256049   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:48.261653   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.261730   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:54:48.261739   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.261746   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.261749   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.264166   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.264759   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.264776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.264785   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.264790   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.266666   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.267238   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.267257   95151 pod_ready.go:82] duration metric: took 5.581341ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267267   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267326   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:54:48.267336   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.267346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.267353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.269749   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.270236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.270252   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.270259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.270262   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.272089   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.272472   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.272487   95151 pod_ready.go:82] duration metric: took 5.21491ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272495   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272536   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:54:48.272543   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.272550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.272553   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.274596   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.275004   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.275018   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.275024   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.275028   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.277124   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.277710   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.277730   95151 pod_ready.go:82] duration metric: took 5.229334ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277742   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277804   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:54:48.277816   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.277826   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.277830   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282085   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:48.282776   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.282794   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.282804   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282810   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.284715   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.285139   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.285156   95151 pod_ready.go:82] duration metric: took 7.407951ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.285172   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.449552   95151 request.go:632] Waited for 164.30368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449649   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.449658   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.449662   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.452644   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.649614   95151 request.go:632] Waited for 196.347979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649674   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649678   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.649686   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.649691   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.652639   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.653086   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.653104   95151 pod_ready.go:82] duration metric: took 367.924183ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.653115   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.849567   95151 request.go:632] Waited for 196.382043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849633   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849638   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.849645   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.849650   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.853050   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.049149   95151 request.go:632] Waited for 195.394568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049239   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049247   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.049258   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.049265   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.052619   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.053476   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.053498   95151 pod_ready.go:82] duration metric: took 400.377088ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.053510   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.249514   95151 request.go:632] Waited for 195.91409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249580   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.249588   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.249592   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.252347   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.449321   95151 request.go:632] Waited for 196.389294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449390   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449397   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.449406   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.449409   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.451910   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.452527   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.452552   95151 pod_ready.go:82] duration metric: took 399.03422ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.452565   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.649568   95151 request.go:632] Waited for 196.917152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.649643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.649647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.652785   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.848836   95151 request.go:632] Waited for 195.315288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848913   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848921   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.848932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.848937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.851674   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.852191   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.852210   95151 pod_ready.go:82] duration metric: took 399.639073ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.852221   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.049350   95151 request.go:632] Waited for 197.035616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049425   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049433   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.049443   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.049452   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.052771   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.248743   95151 request.go:632] Waited for 195.280445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248807   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248812   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.248827   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.248832   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.251804   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:50.252387   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.252412   95151 pod_ready.go:82] duration metric: took 400.185555ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.252424   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.449549   95151 request.go:632] Waited for 197.016421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449623   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449628   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.449639   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.449643   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.453027   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.649191   95151 request.go:632] Waited for 195.415709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649276   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649281   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.649289   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.649293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.652536   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.653266   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.653285   95151 pod_ready.go:82] duration metric: took 400.855966ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.653296   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.849376   95151 request.go:632] Waited for 196.004526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849458   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849463   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.849471   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.849475   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.852508   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.049649   95151 request.go:632] Waited for 196.358583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049709   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049715   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.049722   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.049726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.053157   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.053815   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.053835   95151 pod_ready.go:82] duration metric: took 400.533283ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.053846   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.248991   95151 request.go:632] Waited for 195.052058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249059   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249064   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.249072   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.249078   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.252735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.448724   95151 request.go:632] Waited for 195.285595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448790   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448806   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.448820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.448825   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.452721   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.453238   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.453263   95151 pod_ready.go:82] duration metric: took 399.409754ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.453278   95151 pod_ready.go:39] duration metric: took 3.200858022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:54:51.453306   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:54:51.453378   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:54:51.468618   95151 api_server.go:72] duration metric: took 21.98130215s to wait for apiserver process to appear ...
	I1028 11:54:51.468648   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:54:51.468673   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:54:51.472937   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:54:51.473008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:54:51.473014   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.473022   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.473030   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.473790   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:54:51.473893   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:54:51.473910   95151 api_server.go:131] duration metric: took 5.255617ms to wait for apiserver health ...
	I1028 11:54:51.473917   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:54:51.649350   95151 request.go:632] Waited for 175.3296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649418   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649424   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.649431   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.649436   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.653819   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:51.658610   95151 system_pods.go:59] 17 kube-system pods found
	I1028 11:54:51.658641   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:51.658646   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:51.658651   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:51.658654   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:51.658657   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:51.658660   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:51.658664   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:51.658669   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:51.658674   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:51.658682   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:51.658691   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:51.658696   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:51.658700   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:51.658704   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:51.658707   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:51.658710   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:51.658715   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:51.658722   95151 system_pods.go:74] duration metric: took 184.79709ms to wait for pod list to return data ...
	I1028 11:54:51.658732   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:54:51.849471   95151 request.go:632] Waited for 190.648261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849537   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.849546   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.849549   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.853472   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.853716   95151 default_sa.go:45] found service account: "default"
	I1028 11:54:51.853732   95151 default_sa.go:55] duration metric: took 194.991571ms for default service account to be created ...
	I1028 11:54:51.853742   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:54:52.049206   95151 request.go:632] Waited for 195.38768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049272   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049279   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.049287   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.049293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.055256   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:54:52.060109   95151 system_pods.go:86] 17 kube-system pods found
	I1028 11:54:52.060133   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:52.060139   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:52.060143   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:52.060147   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:52.060151   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:52.060154   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:52.060158   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:52.060162   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:52.060166   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:52.060171   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:52.060175   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:52.060178   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:52.060182   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:52.060185   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:52.060188   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:52.060192   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:52.060196   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:52.060203   95151 system_pods.go:126] duration metric: took 206.45399ms to wait for k8s-apps to be running ...
	I1028 11:54:52.060213   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:54:52.060255   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:52.076447   95151 system_svc.go:56] duration metric: took 16.226067ms WaitForService to wait for kubelet
	I1028 11:54:52.076476   95151 kubeadm.go:582] duration metric: took 22.589167548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:54:52.076506   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:54:52.248935   95151 request.go:632] Waited for 172.334931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.248998   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.249004   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.249011   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.249015   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.252625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:52.253475   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253500   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253515   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253518   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253523   95151 node_conditions.go:105] duration metric: took 177.008634ms to run NodePressure ...
	I1028 11:54:52.253537   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:54:52.253563   95151 start.go:255] writing updated cluster config ...
	I1028 11:54:52.255885   95151 out.go:201] 
	I1028 11:54:52.257299   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:52.257397   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.258847   95151 out.go:177] * Starting "ha-273199-m03" control-plane node in "ha-273199" cluster
	I1028 11:54:52.259962   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:54:52.259986   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:54:52.260095   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:54:52.260118   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:54:52.260241   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.260461   95151 start.go:360] acquireMachinesLock for ha-273199-m03: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:54:52.260509   95151 start.go:364] duration metric: took 28.17µs to acquireMachinesLock for "ha-273199-m03"
	I1028 11:54:52.260527   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:52.260626   95151 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:54:52.262400   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:54:52.262503   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:52.262543   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:52.277859   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I1028 11:54:52.278262   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:52.278738   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:52.278759   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:52.279160   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:52.279351   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:54:52.279503   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:54:52.279669   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:54:52.279701   95151 client.go:168] LocalClient.Create starting
	I1028 11:54:52.279735   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:54:52.279771   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279787   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279863   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:54:52.279888   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279905   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279929   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:54:52.279940   95151 main.go:141] libmachine: (ha-273199-m03) Calling .PreCreateCheck
	I1028 11:54:52.280085   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:54:52.280426   95151 main.go:141] libmachine: Creating machine...
	I1028 11:54:52.280439   95151 main.go:141] libmachine: (ha-273199-m03) Calling .Create
	I1028 11:54:52.280557   95151 main.go:141] libmachine: (ha-273199-m03) Creating KVM machine...
	I1028 11:54:52.281865   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing default KVM network
	I1028 11:54:52.281971   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing private KVM network mk-ha-273199
	I1028 11:54:52.282111   95151 main.go:141] libmachine: (ha-273199-m03) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.282133   95151 main.go:141] libmachine: (ha-273199-m03) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:54:52.282187   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.282077   95896 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.282257   95151 main.go:141] libmachine: (ha-273199-m03) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:54:52.559668   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.559518   95896 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa...
	I1028 11:54:52.735541   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.735336   95896 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk...
	I1028 11:54:52.735589   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing magic tar header
	I1028 11:54:52.735964   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing SSH key tar header
	I1028 11:54:52.736074   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.736016   95896 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.736145   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03
	I1028 11:54:52.736240   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 (perms=drwx------)
	I1028 11:54:52.736277   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:54:52.736290   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:54:52.736342   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:54:52.736362   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:54:52.736375   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.736394   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:54:52.736406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:54:52.736415   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:54:52.736428   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:54:52.736436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home
	I1028 11:54:52.736447   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:54:52.736462   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:52.736473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Skipping /home - not owner
	I1028 11:54:52.737378   95151 main.go:141] libmachine: (ha-273199-m03) define libvirt domain using xml: 
	I1028 11:54:52.737401   95151 main.go:141] libmachine: (ha-273199-m03) <domain type='kvm'>
	I1028 11:54:52.737412   95151 main.go:141] libmachine: (ha-273199-m03)   <name>ha-273199-m03</name>
	I1028 11:54:52.737420   95151 main.go:141] libmachine: (ha-273199-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:54:52.737428   95151 main.go:141] libmachine: (ha-273199-m03)   <vcpu>2</vcpu>
	I1028 11:54:52.737434   95151 main.go:141] libmachine: (ha-273199-m03)   <features>
	I1028 11:54:52.737442   95151 main.go:141] libmachine: (ha-273199-m03)     <acpi/>
	I1028 11:54:52.737451   95151 main.go:141] libmachine: (ha-273199-m03)     <apic/>
	I1028 11:54:52.737465   95151 main.go:141] libmachine: (ha-273199-m03)     <pae/>
	I1028 11:54:52.737475   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737485   95151 main.go:141] libmachine: (ha-273199-m03)   </features>
	I1028 11:54:52.737498   95151 main.go:141] libmachine: (ha-273199-m03)   <cpu mode='host-passthrough'>
	I1028 11:54:52.737507   95151 main.go:141] libmachine: (ha-273199-m03)   
	I1028 11:54:52.737512   95151 main.go:141] libmachine: (ha-273199-m03)   </cpu>
	I1028 11:54:52.737516   95151 main.go:141] libmachine: (ha-273199-m03)   <os>
	I1028 11:54:52.737521   95151 main.go:141] libmachine: (ha-273199-m03)     <type>hvm</type>
	I1028 11:54:52.737530   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='cdrom'/>
	I1028 11:54:52.737537   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='hd'/>
	I1028 11:54:52.737549   95151 main.go:141] libmachine: (ha-273199-m03)     <bootmenu enable='no'/>
	I1028 11:54:52.737555   95151 main.go:141] libmachine: (ha-273199-m03)   </os>
	I1028 11:54:52.737566   95151 main.go:141] libmachine: (ha-273199-m03)   <devices>
	I1028 11:54:52.737573   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='cdrom'>
	I1028 11:54:52.737605   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/boot2docker.iso'/>
	I1028 11:54:52.737626   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:54:52.737633   95151 main.go:141] libmachine: (ha-273199-m03)       <readonly/>
	I1028 11:54:52.737643   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737649   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='disk'>
	I1028 11:54:52.737657   95151 main.go:141] libmachine: (ha-273199-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:54:52.737664   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk'/>
	I1028 11:54:52.737674   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:54:52.737679   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737686   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737691   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='mk-ha-273199'/>
	I1028 11:54:52.737697   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737702   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737709   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737714   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='default'/>
	I1028 11:54:52.737721   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737725   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737736   95151 main.go:141] libmachine: (ha-273199-m03)     <serial type='pty'>
	I1028 11:54:52.737741   95151 main.go:141] libmachine: (ha-273199-m03)       <target port='0'/>
	I1028 11:54:52.737750   95151 main.go:141] libmachine: (ha-273199-m03)     </serial>
	I1028 11:54:52.737755   95151 main.go:141] libmachine: (ha-273199-m03)     <console type='pty'>
	I1028 11:54:52.737764   95151 main.go:141] libmachine: (ha-273199-m03)       <target type='serial' port='0'/>
	I1028 11:54:52.737796   95151 main.go:141] libmachine: (ha-273199-m03)     </console>
	I1028 11:54:52.737822   95151 main.go:141] libmachine: (ha-273199-m03)     <rng model='virtio'>
	I1028 11:54:52.737835   95151 main.go:141] libmachine: (ha-273199-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:54:52.737849   95151 main.go:141] libmachine: (ha-273199-m03)     </rng>
	I1028 11:54:52.737862   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737871   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737883   95151 main.go:141] libmachine: (ha-273199-m03)   </devices>
	I1028 11:54:52.737895   95151 main.go:141] libmachine: (ha-273199-m03) </domain>
	I1028 11:54:52.737906   95151 main.go:141] libmachine: (ha-273199-m03) 
	I1028 11:54:52.744674   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:8b:32:6e in network default
	I1028 11:54:52.745255   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring networks are active...
	I1028 11:54:52.745282   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:52.745947   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network default is active
	I1028 11:54:52.746212   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network mk-ha-273199 is active
	I1028 11:54:52.746662   95151 main.go:141] libmachine: (ha-273199-m03) Getting domain xml...
	I1028 11:54:52.747399   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:53.955503   95151 main.go:141] libmachine: (ha-273199-m03) Waiting to get IP...
	I1028 11:54:53.956506   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:53.956900   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:53.956929   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:53.956873   95896 retry.go:31] will retry after 206.527377ms: waiting for machine to come up
	I1028 11:54:54.165229   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.165718   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.165747   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.165667   95896 retry.go:31] will retry after 298.714532ms: waiting for machine to come up
	I1028 11:54:54.466211   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.466648   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.466677   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.466592   95896 retry.go:31] will retry after 313.294403ms: waiting for machine to come up
	I1028 11:54:54.781194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.781751   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.781781   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.781697   95896 retry.go:31] will retry after 490.276773ms: waiting for machine to come up
	I1028 11:54:55.273485   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:55.273980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:55.274010   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:55.273908   95896 retry.go:31] will retry after 747.967363ms: waiting for machine to come up
	I1028 11:54:56.023947   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.024406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.024436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.024354   95896 retry.go:31] will retry after 879.955575ms: waiting for machine to come up
	I1028 11:54:56.905338   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.905786   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.905854   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.905727   95896 retry.go:31] will retry after 900.403526ms: waiting for machine to come up
	I1028 11:54:57.807987   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:57.808508   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:57.808532   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:57.808456   95896 retry.go:31] will retry after 915.528727ms: waiting for machine to come up
	I1028 11:54:58.725704   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:58.726141   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:58.726171   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:58.726079   95896 retry.go:31] will retry after 1.589094397s: waiting for machine to come up
	I1028 11:55:00.316739   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:00.317159   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:00.317192   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:00.317103   95896 retry.go:31] will retry after 2.113867198s: waiting for machine to come up
	I1028 11:55:02.432898   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:02.433399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:02.433425   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:02.433344   95896 retry.go:31] will retry after 2.28050393s: waiting for machine to come up
	I1028 11:55:04.716742   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:04.717181   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:04.717204   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:04.717143   95896 retry.go:31] will retry after 2.249398536s: waiting for machine to come up
	I1028 11:55:06.969577   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:06.970058   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:06.970080   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:06.970033   95896 retry.go:31] will retry after 2.958136846s: waiting for machine to come up
	I1028 11:55:09.929637   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:09.930041   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:09.930070   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:09.929982   95896 retry.go:31] will retry after 4.070894756s: waiting for machine to come up
	I1028 11:55:14.002837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003301   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has current primary IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003323   95151 main.go:141] libmachine: (ha-273199-m03) Found IP for machine: 192.168.39.14
	I1028 11:55:14.003336   95151 main.go:141] libmachine: (ha-273199-m03) Reserving static IP address...
	I1028 11:55:14.003697   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "ha-273199-m03", mac: "52:54:00:46:1d:e9", ip: "192.168.39.14"} in network mk-ha-273199
	I1028 11:55:14.078161   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:14.078198   95151 main.go:141] libmachine: (ha-273199-m03) Reserved static IP address: 192.168.39.14
	I1028 11:55:14.078221   95151 main.go:141] libmachine: (ha-273199-m03) Waiting for SSH to be available...
	I1028 11:55:14.080426   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.080837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199
	I1028 11:55:14.080864   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find defined IP address of network mk-ha-273199 interface with MAC address 52:54:00:46:1d:e9
	I1028 11:55:14.080998   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:14.081020   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:14.081088   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:14.081126   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:14.081172   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:14.084960   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 11:55:14.084981   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 11:55:14.084988   95151 main.go:141] libmachine: (ha-273199-m03) DBG | command : exit 0
	I1028 11:55:14.084993   95151 main.go:141] libmachine: (ha-273199-m03) DBG | err     : exit status 255
	I1028 11:55:14.084999   95151 main.go:141] libmachine: (ha-273199-m03) DBG | output  : 
	I1028 11:55:17.085220   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:17.087584   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.087980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.088014   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.088124   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:17.088151   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:17.088186   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:17.088203   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:17.088242   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:17.219250   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 11:55:17.219518   95151 main.go:141] libmachine: (ha-273199-m03) KVM machine creation complete!
	I1028 11:55:17.219876   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:17.220483   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220685   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220845   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:55:17.220861   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetState
	I1028 11:55:17.222309   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:55:17.222328   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:55:17.222335   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:55:17.222343   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.224588   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.224925   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.224952   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.225089   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.225238   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225410   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225535   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.225685   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.225933   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.225948   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:55:17.334782   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:55:17.334812   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:55:17.334821   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.337833   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338269   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.338297   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338479   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.338845   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339007   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339176   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.339341   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.339539   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.339557   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:55:17.451978   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:55:17.452046   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:55:17.452059   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:55:17.452070   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452277   95151 buildroot.go:166] provisioning hostname "ha-273199-m03"
	I1028 11:55:17.452288   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452476   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.455103   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455535   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.455562   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455708   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.455867   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.455984   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.456067   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.456198   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.456408   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.456424   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m03 && echo "ha-273199-m03" | sudo tee /etc/hostname
	I1028 11:55:17.580666   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m03
	
	I1028 11:55:17.580700   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.583194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583511   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.583528   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583802   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.584016   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584194   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584336   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.584491   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.584694   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.584718   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:55:17.704448   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:55:17.704483   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:55:17.704502   95151 buildroot.go:174] setting up certificates
	I1028 11:55:17.704515   95151 provision.go:84] configureAuth start
	I1028 11:55:17.704525   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.704814   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:17.707324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707661   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.707690   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707847   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.710530   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710812   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.710834   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710987   95151 provision.go:143] copyHostCerts
	I1028 11:55:17.711016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711055   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:55:17.711067   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711144   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:55:17.711240   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711266   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:55:17.711274   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711309   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:55:17.711375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711397   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:55:17.711406   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711441   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:55:17.711512   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m03 san=[127.0.0.1 192.168.39.14 ha-273199-m03 localhost minikube]
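Side note: the server certificate generated here is signed by the cluster's local CA (the ca.pem/ca-key.pem paths listed in the auth options above), with a SAN set covering the loopback address, the node IP and the hostname aliases. A hypothetical, self-signed openssl equivalent, shown only to make the SAN list for ha-273199-m03 explicit (minikube does the signing itself), would be:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.ha-273199-m03" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.39.14,DNS:ha-273199-m03,DNS:localhost,DNS:minikube"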
	I1028 11:55:17.872732   95151 provision.go:177] copyRemoteCerts
	I1028 11:55:17.872791   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:55:17.872822   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.875766   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876231   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.876275   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876474   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.876674   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.876862   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.877007   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:17.961016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:55:17.961081   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:55:17.984138   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:55:17.984226   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:55:18.008131   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:55:18.008227   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:55:18.031369   95151 provision.go:87] duration metric: took 326.838997ms to configureAuth
	I1028 11:55:18.031405   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:55:18.031687   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:18.031768   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.034245   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034499   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.034512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034834   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.035030   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035212   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035366   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.035511   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.035733   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.035755   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:55:18.272929   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:55:18.272957   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:55:18.272965   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetURL
	I1028 11:55:18.274324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using libvirt version 6000000
	I1028 11:55:18.276917   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277260   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.277286   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277469   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:55:18.277495   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:55:18.277503   95151 client.go:171] duration metric: took 25.997791015s to LocalClient.Create
	I1028 11:55:18.277533   95151 start.go:167] duration metric: took 25.997864783s to libmachine.API.Create "ha-273199"
	I1028 11:55:18.277545   95151 start.go:293] postStartSetup for "ha-273199-m03" (driver="kvm2")
	I1028 11:55:18.277554   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:55:18.277570   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.277772   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:55:18.277797   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.280107   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.280500   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280672   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.280818   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.280972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.281096   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.364949   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:55:18.368679   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:55:18.368702   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:55:18.368765   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:55:18.368831   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:55:18.368841   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:55:18.368936   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:55:18.377576   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:18.398595   95151 start.go:296] duration metric: took 121.036125ms for postStartSetup
	I1028 11:55:18.398663   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:18.399226   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.401512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.401817   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.401845   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.402086   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:55:18.402271   95151 start.go:128] duration metric: took 26.1416351s to createHost
	I1028 11:55:18.402293   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.404399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404785   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.404814   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.405120   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405233   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405349   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.405479   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.405697   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.405707   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:55:18.516101   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116518.496273878
	
	I1028 11:55:18.516127   95151 fix.go:216] guest clock: 1730116518.496273878
	I1028 11:55:18.516135   95151 fix.go:229] Guest: 2024-10-28 11:55:18.496273878 +0000 UTC Remote: 2024-10-28 11:55:18.402282303 +0000 UTC m=+140.534554028 (delta=93.991575ms)
	I1028 11:55:18.516153   95151 fix.go:200] guest clock delta is within tolerance: 93.991575ms
	I1028 11:55:18.516160   95151 start.go:83] releasing machines lock for "ha-273199-m03", held for 26.255640766s
	I1028 11:55:18.516185   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.516440   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.519412   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.519815   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.519848   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.524337   95151 out.go:177] * Found network options:
	I1028 11:55:18.525743   95151 out.go:177]   - NO_PROXY=192.168.39.208,192.168.39.225
	W1028 11:55:18.527126   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.527158   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.527179   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527726   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527918   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.528047   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:55:18.528091   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	W1028 11:55:18.528116   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.528141   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.528213   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:55:18.528236   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.531068   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531433   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531460   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531507   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531598   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.531771   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.531976   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531993   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532001   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.532119   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.532160   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.532259   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.532384   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532522   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.778405   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:55:18.783655   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:55:18.783756   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:55:18.797677   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:55:18.797700   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:55:18.797761   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:55:18.814061   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:55:18.825773   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:55:18.825825   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:55:18.837935   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:55:18.849554   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:55:18.965481   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:55:19.099249   95151 docker.go:233] disabling docker service ...
	I1028 11:55:19.099323   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:55:19.113114   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:55:19.124849   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:55:19.250769   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:55:19.359879   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:55:19.373349   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:55:19.389521   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:55:19.389615   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.398854   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:55:19.398906   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.407802   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.417192   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.427164   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:55:19.436640   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.445835   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.462270   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
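If those sed edits all apply cleanly, the relevant portion of /etc/crio/crio.conf.d/02-crio.conf on the new node should end up roughly as follows (reconstructed from the commands above, not captured from the guest):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]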
	I1028 11:55:19.471609   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:55:19.480345   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:55:19.480383   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:55:19.492803   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
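The br_netfilter modprobe and the echo into /proc/sys/net/ipv4/ip_forward above only affect the running guest; a persistent equivalent, which the test does not need because the VM is provisioned fresh each run, would be something like:

    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system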
	I1028 11:55:19.501227   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:19.617782   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:55:19.703544   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:55:19.703660   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:55:19.708269   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:55:19.708326   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:55:19.712086   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:55:19.749930   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:55:19.750010   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.775811   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.801952   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:55:19.803114   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:55:19.804273   95151 out.go:177]   - env NO_PROXY=192.168.39.208,192.168.39.225
	I1028 11:55:19.805417   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:19.808218   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808625   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:19.808655   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808919   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:55:19.812627   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:55:19.824073   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:55:19.824319   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:19.824582   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.824620   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.838910   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I1028 11:55:19.839306   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.839763   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.839782   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.840142   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.840307   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:55:19.841569   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:19.841856   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.841897   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.855881   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I1028 11:55:19.856375   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.856826   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.856843   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.857163   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.857327   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:19.857467   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.14
	I1028 11:55:19.857480   95151 certs.go:194] generating shared ca certs ...
	I1028 11:55:19.857496   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.857646   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:55:19.857702   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:55:19.857720   95151 certs.go:256] generating profile certs ...
	I1028 11:55:19.857827   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:55:19.857863   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7
	I1028 11:55:19.857891   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.14 192.168.39.254]
	I1028 11:55:19.946624   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 ...
	I1028 11:55:19.946653   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7: {Name:mk3236f0712e0310e6a0f8a3941b2eeadd0570c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946816   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 ...
	I1028 11:55:19.946829   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7: {Name:mka0c613afe4278aca8a4ff26ddba521c4e341b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946908   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:55:19.947042   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:55:19.947166   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:55:19.947182   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:55:19.947196   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:55:19.947208   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:55:19.947221   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:55:19.947233   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:55:19.947245   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:55:19.947256   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:55:19.967716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:55:19.967802   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:55:19.967847   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:55:19.967864   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:55:19.967899   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:55:19.967933   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:55:19.967965   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:55:19.968019   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:19.968051   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:55:19.968066   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:55:19.968076   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:19.968113   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:19.971063   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971502   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:19.971527   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971715   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:19.971902   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:19.972073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:19.972212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:20.047980   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:55:20.052462   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:55:20.063257   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:55:20.067603   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:55:20.083360   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:55:20.087209   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:55:20.096958   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:55:20.100595   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:55:20.113829   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:55:20.117648   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:55:20.126859   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:55:20.130471   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:55:20.139759   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:55:20.167843   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:55:20.191233   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:55:20.214438   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:55:20.235571   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:55:20.261436   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:55:20.285034   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:55:20.310624   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:55:20.332555   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:55:20.354176   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:55:20.374974   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:55:20.396001   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:55:20.411032   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:55:20.426186   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:55:20.441112   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:55:20.456730   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:55:20.472441   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:55:20.488012   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:55:20.502635   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:55:20.508164   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:55:20.519601   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523711   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523777   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.529016   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 11:55:20.538537   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:55:20.548100   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552319   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552375   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.557900   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:55:20.567792   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:55:20.577338   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581264   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581323   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.586529   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
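The 51391683.0, 3ec20f2e.0 and b5213941.0 link names used above follow OpenSSL's subject-hash convention for /etc/ssl/certs; the hash half can be reproduced with the same openssl call the provisioner runs, e.g. for the minikube CA in this run (b5213941 here, matching the link name):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0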
	I1028 11:55:20.596428   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:55:20.600115   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:55:20.600167   95151 kubeadm.go:934] updating node {m03 192.168.39.14 8443 v1.31.2 crio true true} ...
	I1028 11:55:20.600258   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
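The kubelet unit override shown above is what later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 312-byte scp a little further down); after the daemon-reload, the merged unit and the node-specific flags could be inspected on the guest with, for example:

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart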
	I1028 11:55:20.600291   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:55:20.600325   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:55:20.616989   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:55:20.617099   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
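This manifest runs kube-vip as a static pod on each control-plane node, advertises 192.168.39.254 over ARP on eth0, and elects a single VIP holder through the plndr-cp-lock lease named in the env block above. Once the cluster is reachable, the current holder could be checked with (a verification step, not part of the test flow):

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
    kubectl -n kube-system get pods -o wide | grep kube-vip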
	I1028 11:55:20.617151   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.626357   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:55:20.626409   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.634842   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:55:20.634876   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634922   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:55:20.634942   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634948   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.634853   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:55:20.635007   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.635050   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:55:20.638692   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:55:20.638722   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:55:20.663836   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.663872   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:55:20.663905   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:55:20.663970   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.699827   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:55:20.699877   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
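All three binaries are streamed from the host-side cache here; when that cache is cold, minikube downloads them from dl.k8s.io using the checksum URLs logged above. A manual equivalent for one binary (hypothetical, not something the test runs) would be:

    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check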
	I1028 11:55:21.384145   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:55:21.393997   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:55:21.409884   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:55:21.425811   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:55:21.441992   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:55:21.445803   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:55:21.457453   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:21.579499   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:21.596582   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:21.597031   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:21.597081   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:21.612568   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I1028 11:55:21.613014   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:21.613608   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:21.613636   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:21.613983   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:21.614133   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:21.614251   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:55:21.614418   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:55:21.614445   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:21.617174   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617565   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:21.617589   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617762   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:21.617923   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:21.618054   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:21.618200   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:21.766904   95151 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:21.766967   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443"
	I1028 11:55:42.707746   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443": (20.940747813s)
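The 20.9s spent on that command is the kubeadm join itself, executed on the joining machine over SSH. A rough sketch of such a remote invocation with golang.org/x/crypto/ssh follows (host, key path, token and hash are placeholders; minikube's actual ssh_runner/sshutil plumbing is more involved):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; the report uses per-machine keys under .minikube/machines.
	key, err := os.ReadFile("/path/to/machines/ha-273199-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.14:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The same join invocation as in the log, minus the real token and CA hash.
	cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join ` +
		`control-plane.minikube.internal:8443 --token <token> ` +
		`--discovery-token-ca-cert-hash sha256:<hash> --control-plane ` +
		`--apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443`
	out, err := session.CombinedOutput(cmd)
	fmt.Println(string(out))
	if err != nil {
		log.Fatal(err)
	}
}
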
	I1028 11:55:42.707786   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:55:43.259520   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m03 minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:55:43.364349   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:55:43.486876   95151 start.go:319] duration metric: took 21.872622243s to joinCluster
	I1028 11:55:43.486974   95151 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:43.487346   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:43.488385   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:55:43.489624   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:43.714323   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:43.797310   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:55:43.797585   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:55:43.797659   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:55:43.797894   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m03" to be "Ready" ...
	I1028 11:55:43.797978   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:43.797989   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:43.797999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:43.798002   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:43.801478   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.298184   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.298206   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.298216   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.298222   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.301984   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.798900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.798925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.798933   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.798937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.802625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.298286   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.298308   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.298316   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.298323   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.301749   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.798575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.798599   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.798606   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.798609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.801730   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.802260   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:46.298797   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.298831   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.298843   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.298848   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.301856   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:46.798975   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.798994   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.799003   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.799009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.802334   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.298943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.298969   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.298981   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.298987   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.302012   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.799134   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.799156   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.799164   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.799170   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.802967   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.803491   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:48.298732   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.298760   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.298772   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.298778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.302148   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:48.799142   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.799170   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.799182   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.799190   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.802961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.298717   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.298741   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.298752   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.298759   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.302024   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.798693   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.798713   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.798721   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.798726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.832585   95151 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1028 11:55:49.833180   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:50.298166   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.298188   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.298197   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.298201   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.301302   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:50.798073   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.798095   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.798104   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.798108   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.803748   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:55:51.298872   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.298899   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.298910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.298913   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.301397   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:51.798388   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.798420   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.798428   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.798434   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.801659   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:52.298527   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.298549   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.298561   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.298565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.301585   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:52.302112   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:52.798187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.798212   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.798223   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.798228   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.801528   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.298514   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.298542   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.298550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.298554   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.301689   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.798539   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.798559   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.798574   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.798578   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.801491   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:54.298293   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.298317   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.298325   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.298330   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.302064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:54.302719   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:54.798749   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.798769   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.798778   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.798783   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.801841   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.298678   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.298701   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.298712   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.298716   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.302094   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.798085   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.798105   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.798113   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.798116   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.800935   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:56.298920   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.298949   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.298958   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.298962   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.302100   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.798358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.798381   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.798390   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.798394   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.801648   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.802259   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:57.298900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.298925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.298937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.298943   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.301768   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:57.798111   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.798136   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.798148   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.798154   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.802245   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:55:58.299121   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.299149   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.299162   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.299171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.302703   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:58.798590   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.798615   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.798628   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.798634   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.801208   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:59.299008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.299036   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.299047   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.299054   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.302735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:59.303420   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:59.798874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.798896   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.798903   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.798907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.802046   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.298533   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.298555   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.298562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.298567   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.301628   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.798592   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.798612   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.798619   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.798623   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.801213   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.298108   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.298133   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.298143   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.298148   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301184   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.301784   95151 node_ready.go:49] node "ha-273199-m03" has status "Ready":"True"
	I1028 11:56:01.301805   95151 node_ready.go:38] duration metric: took 17.503895303s for node "ha-273199-m03" to be "Ready" ...
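The loop above is nothing more than a GET of the Node object roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of the same readiness wait (kubeconfig path and node name taken from the log; the wait.PollUntilContextTimeout wrapper is illustrative, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19875-77800/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, like the loop in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-273199-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			return nodeIsReady(node), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-273199-m03" is Ready`)
}
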
	I1028 11:56:01.301814   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:01.301887   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:01.301896   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.301903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301911   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.308580   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:01.316771   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.316873   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:56:01.316885   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.316900   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.316907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.320308   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.320987   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.321003   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.321013   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.321019   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.323787   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.324347   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.324365   95151 pod_ready.go:82] duration metric: took 7.565058ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324373   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324419   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:56:01.324427   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.324433   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.324439   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.326735   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.327335   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.327355   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.327365   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.327373   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.329530   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.330057   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.330074   95151 pod_ready.go:82] duration metric: took 5.693547ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330086   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330136   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:56:01.330146   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.330155   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.330165   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.332526   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.332999   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.333016   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.333027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.333032   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.334989   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.335422   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.335440   95151 pod_ready.go:82] duration metric: took 5.348301ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335448   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335488   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:56:01.335496   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.335502   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.335506   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.337739   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.338582   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:01.338597   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.338604   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.338609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.340562   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.341152   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.341169   95151 pod_ready.go:82] duration metric: took 5.715551ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.341177   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.498553   95151 request.go:632] Waited for 157.309109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498638   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498650   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.498660   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.498665   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.501894   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.699071   95151 request.go:632] Waited for 196.385515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699155   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699161   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.699169   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.699174   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.702324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.702894   95151 pod_ready.go:93] pod "etcd-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.702916   95151 pod_ready.go:82] duration metric: took 361.733856ms for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.702934   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.898705   95151 request.go:632] Waited for 195.691939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898957   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898985   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.898999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.899009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.902374   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.098254   95151 request.go:632] Waited for 195.287162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098328   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098335   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.098347   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.098353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.101196   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.101738   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.101763   95151 pod_ready.go:82] duration metric: took 398.820372ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.101781   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.298212   95151 request.go:632] Waited for 196.275952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298275   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298281   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.298290   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.298301   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.301860   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.499036   95151 request.go:632] Waited for 196.376254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499126   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499138   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.499147   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.499155   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.502306   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.502777   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.502797   95151 pod_ready.go:82] duration metric: took 401.004802ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.502809   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.698962   95151 request.go:632] Waited for 196.058055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699040   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699049   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.699060   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.699069   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.702304   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.898265   95151 request.go:632] Waited for 195.32967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898332   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898337   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.898346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.898349   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.901285   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.901755   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.901774   95151 pod_ready.go:82] duration metric: took 398.957477ms for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.901786   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.098215   95151 request.go:632] Waited for 196.338003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098302   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098312   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.098326   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.098336   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.101391   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.299109   95151 request.go:632] Waited for 197.052748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299198   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.299211   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.299219   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.302429   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.303124   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.303143   95151 pod_ready.go:82] duration metric: took 401.346731ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.303154   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.499186   95151 request.go:632] Waited for 195.929738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499255   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499260   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.499268   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.499283   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.502463   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.698544   95151 request.go:632] Waited for 195.349647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698622   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698627   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.698635   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.698642   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.701741   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.702403   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.702426   95151 pod_ready.go:82] duration metric: took 399.264829ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.702441   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.898913   95151 request.go:632] Waited for 196.399022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899002   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899011   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.899023   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.899029   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.902056   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.099025   95151 request.go:632] Waited for 196.30082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099105   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099116   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.099127   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.099137   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.102284   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.102800   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.102822   95151 pod_ready.go:82] duration metric: took 400.371733ms for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.102837   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.299058   95151 request.go:632] Waited for 196.137259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299139   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299144   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.299153   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.299157   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.302746   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.499079   95151 request.go:632] Waited for 195.393701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499163   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499171   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.499185   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.499195   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.503387   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:56:04.504037   95151 pod_ready.go:93] pod "kube-proxy-9g4h7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.504061   95151 pod_ready.go:82] duration metric: took 401.216048ms for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.504076   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.698976   95151 request.go:632] Waited for 194.814472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699071   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.699079   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.699084   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.702055   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:04.898609   95151 request.go:632] Waited for 195.739677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898675   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898683   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.898693   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.898700   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.901923   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.902584   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.902605   95151 pod_ready.go:82] duration metric: took 398.518978ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.902614   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.098688   95151 request.go:632] Waited for 195.978821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098754   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098759   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.098768   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.098778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.102003   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.298290   95151 request.go:632] Waited for 195.293864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298361   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298369   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.298380   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.298386   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.301816   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.302344   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.302364   95151 pod_ready.go:82] duration metric: took 399.743307ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.302375   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.498499   95151 request.go:632] Waited for 196.032121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498559   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498565   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.498572   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.498584   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.501658   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.698555   95151 request.go:632] Waited for 196.349621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698639   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.698659   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.698670   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.701856   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.702478   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.702502   95151 pod_ready.go:82] duration metric: took 400.117869ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.702516   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.898432   95151 request.go:632] Waited for 195.801686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898504   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898512   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.898523   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.898535   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.901090   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:06.099148   95151 request.go:632] Waited for 197.39166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099243   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099256   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.099266   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.099273   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.102573   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.103298   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.103317   95151 pod_ready.go:82] duration metric: took 400.794152ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.103328   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.298494   95151 request.go:632] Waited for 195.077295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298597   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298623   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.298634   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.298639   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.301973   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.499177   95151 request.go:632] Waited for 196.369372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499245   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499253   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.499263   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.499271   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.503129   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.503622   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.503653   95151 pod_ready.go:82] duration metric: took 400.317222ms for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.503666   95151 pod_ready.go:39] duration metric: took 5.2018361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:06.503683   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:56:06.503735   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:56:06.519167   95151 api_server.go:72] duration metric: took 23.032149937s to wait for apiserver process to appear ...
	I1028 11:56:06.519193   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:56:06.519218   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:56:06.524148   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:56:06.524235   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:56:06.524247   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.524259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.524269   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.525138   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:56:06.525206   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:56:06.525222   95151 api_server.go:131] duration metric: took 6.021057ms to wait for apiserver health ...
	I1028 11:56:06.525232   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:56:06.698920   95151 request.go:632] Waited for 173.589854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699014   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699026   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.699037   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.699046   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.705719   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:06.711799   95151 system_pods.go:59] 24 kube-system pods found
	I1028 11:56:06.711826   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:06.711831   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:06.711834   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:06.711837   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:06.711840   95151 system_pods.go:61] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:06.711845   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:06.711849   95151 system_pods.go:61] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:06.711858   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:06.711864   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:06.711869   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:06.711877   95151 system_pods.go:61] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:06.711884   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:06.711893   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:06.711901   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:06.711906   95151 system_pods.go:61] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:06.711911   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:06.711917   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:06.711923   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:06.711926   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:06.711932   95151 system_pods.go:61] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:06.711935   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:06.711940   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:06.711943   95151 system_pods.go:61] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:06.711947   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:06.711955   95151 system_pods.go:74] duration metric: took 186.713107ms to wait for pod list to return data ...
	I1028 11:56:06.711967   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:56:06.899177   95151 request.go:632] Waited for 187.113111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899242   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.899250   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.899255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.902353   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.902463   95151 default_sa.go:45] found service account: "default"
	I1028 11:56:06.902477   95151 default_sa.go:55] duration metric: took 190.499796ms for default service account to be created ...
	I1028 11:56:06.902489   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:56:07.098925   95151 request.go:632] Waited for 196.358925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099006   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099015   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.099027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.099034   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.104802   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:56:07.111244   95151 system_pods.go:86] 24 kube-system pods found
	I1028 11:56:07.111271   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:07.111276   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:07.111280   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:07.111284   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:07.111287   95151 system_pods.go:89] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:07.111292   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:07.111296   95151 system_pods.go:89] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:07.111301   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:07.111306   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:07.111312   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:07.111320   95151 system_pods.go:89] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:07.111326   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:07.111336   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:07.111342   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:07.111348   95151 system_pods.go:89] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:07.111354   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:07.111358   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:07.111364   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:07.111368   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:07.111374   95151 system_pods.go:89] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:07.111377   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:07.111386   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:07.111391   95151 system_pods.go:89] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:07.111394   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:07.111402   95151 system_pods.go:126] duration metric: took 208.905709ms to wait for k8s-apps to be running ...
	I1028 11:56:07.111413   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:56:07.111468   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:56:07.126987   95151 system_svc.go:56] duration metric: took 15.565787ms WaitForService to wait for kubelet
	I1028 11:56:07.127011   95151 kubeadm.go:582] duration metric: took 23.639999996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:56:07.127031   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:56:07.298754   95151 request.go:632] Waited for 171.640481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298832   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298839   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.298848   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.298857   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.302715   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:07.303776   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303797   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303807   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303810   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303814   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303817   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303821   95151 node_conditions.go:105] duration metric: took 176.784967ms to run NodePressure ...
	I1028 11:56:07.303834   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:56:07.303857   95151 start.go:255] writing updated cluster config ...
	I1028 11:56:07.304142   95151 ssh_runner.go:195] Run: rm -f paused
	I1028 11:56:07.355822   95151 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:56:07.357678   95151 out.go:177] * Done! kubectl is now configured to use "ha-273199" cluster and "default" namespace by default
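	The startup log above ends with two probes against the control plane: GET /healthz (expecting the literal body "ok") and GET /version to read the control-plane version. The following is a minimal sketch, not part of the report, that reproduces those two requests in Go. The address 192.168.39.208:8443 is taken from the log; skipping TLS verification and relying on anonymous access to /healthz and /version are assumptions made for brevity, whereas minikube itself authenticates with the client certificates from the generated kubeconfig.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Assumption: certificate verification is skipped so the sketch stays
		// self-contained; a real client would load the kubeconfig CA and certs.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Same two endpoints the minikube log shows being polled above.
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get("https://192.168.39.208:8443" + path)
			if err != nil {
				log.Fatalf("GET %s: %v", path, err)
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
		}
	}

	A healthy apiserver returns "ok" for /healthz and a JSON version blob for /version, matching the 200 responses recorded in the log.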
	
	
	==> CRI-O <==
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.793236450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116787793209924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f61211f2-fb53-45cf-aefe-9ae02af7d4a9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.793779452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=010e5907-3464-473f-bb82-047cadf23995 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.793879718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=010e5907-3464-473f-bb82-047cadf23995 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.794191219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=010e5907-3464-473f-bb82-047cadf23995 name=/runtime.v1.RuntimeService/ListContainers
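	The ListContainers request/response pairs in this CRI-O debug log are the same RPC that tools such as crictl ps -a issue over the CRI socket. The sketch below, which is not part of the report, shows one way to call that RPC directly with the Kubernetes CRI client; the socket path /var/run/crio/crio.sock and the module versions are assumptions, so adjust them for other runtimes or layouts.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O listens on its default CRI socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter corresponds to the "No filters were applied,
		// returning full container list" debug line in the log above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	Each entry printed by this sketch corresponds to one &Container{...} element in the ListContainersResponse blobs recorded above (busybox, coredns, storage-provisioner, kindnet-cni, kube-proxy, kube-vip, and the control-plane static pods).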
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.843734508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5299a21-9772-4b40-89ba-72045c5579ae name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.843842414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5299a21-9772-4b40-89ba-72045c5579ae name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.844940174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b486f53-9303-4054-9113-5d4d05741387 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.845783751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116787845759716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b486f53-9303-4054-9113-5d4d05741387 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.846468138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83bb3207-5342-4839-b902-2720ab45796a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.846615327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83bb3207-5342-4839-b902-2720ab45796a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.846957066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83bb3207-5342-4839-b902-2720ab45796a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.882637584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88894369-d118-42fd-aba6-b5cfe7389a0c name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.882741413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88894369-d118-42fd-aba6-b5cfe7389a0c name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.883925326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4ab86a9-9338-4cb7-8e24-cf0438d35de5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.884378359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116787884356657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4ab86a9-9338-4cb7-8e24-cf0438d35de5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.884875566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef0159bc-7275-48c4-a5f2-0e67b9f1ef55 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.884926832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef0159bc-7275-48c4-a5f2-0e67b9f1ef55 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.885287573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef0159bc-7275-48c4-a5f2-0e67b9f1ef55 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.919759397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59b51008-b56f-4122-94ee-ef3972b470b1 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.919832296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59b51008-b56f-4122-94ee-ef3972b470b1 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.921147852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2545fa3e-5853-4ef6-9cf5-33082a35e163 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.921558258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116787921538766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2545fa3e-5853-4ef6-9cf5-33082a35e163 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.922103565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef022826-d9ae-4b58-8021-b74e4dc289e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.922152546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef022826-d9ae-4b58-8021-b74e4dc289e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:47 ha-273199 crio[663]: time="2024-10-28 11:59:47.922392354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef022826-d9ae-4b58-8021-b74e4dc289e5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	609ad54d4add2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5aab280940ba8       busybox-7dff88458-fnvwg
	fe58f2eaad87a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   257fc926b128d       coredns-7c65d6cfc9-hc26g
	74749e3632776       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a33a6d6dc5f66       coredns-7c65d6cfc9-7rnn9
	72c80fedf6643       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   53cd5c1c15675       storage-provisioner
	e082051f544c2       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   ef059ce23254d       kindnet-2gldl
	82471ae5ddf92       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   0cbf13a852cd2       kube-proxy-tr5vf
	39409b2e85012       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   cc7ea362731d6       kube-vip-ha-273199
	8b350f0da3b16       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   43ab783eb9151       kube-apiserver-ha-273199
	07773cb979d8f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2541db65f40ae       kube-controller-manager-ha-273199
	6fb4822a5b791       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   737b1cd7f74b4       kube-scheduler-ha-273199
	ec2df51593c58       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   32e3db6238d43       etcd-ha-273199
	
	
	==> coredns [74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d] <==
	[INFO] 10.244.1.2:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227007s
	[INFO] 10.244.1.2:38770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002925427s
	[INFO] 10.244.1.2:48927 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147448s
	[INFO] 10.244.1.2:38077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192376s
	[INFO] 10.244.0.4:54968 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.0.4:57503 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110201s
	[INFO] 10.244.0.4:34291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061267s
	[INFO] 10.244.0.4:50921 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128077s
	[INFO] 10.244.0.4:39917 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062677s
	[INFO] 10.244.2.2:60183 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014203s
	[INFO] 10.244.2.2:40291 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692422s
	[INFO] 10.244.2.2:46423 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149349s
	[INFO] 10.244.2.2:54634 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124106s
	[INFO] 10.244.1.2:50363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142769s
	[INFO] 10.244.1.2:35968 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225253s
	[INFO] 10.244.1.2:45996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107605s
	[INFO] 10.244.1.2:49921 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093269s
	[INFO] 10.244.0.4:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012322s
	[INFO] 10.244.2.2:52722 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002033s
	[INFO] 10.244.2.2:57825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011394s
	[INFO] 10.244.1.2:34495 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211997s
	[INFO] 10.244.1.2:44656 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000288144s
	[INFO] 10.244.0.4:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021258s
	[INFO] 10.244.2.2:60661 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153264s
	[INFO] 10.244.2.2:45534 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088052s
	
	
	==> coredns [fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce] <==
	[INFO] 10.244.0.4:38250 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001327706s
	[INFO] 10.244.0.4:43351 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000111923s
	[INFO] 10.244.0.4:51500 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001177333s
	[INFO] 10.244.2.2:48939 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000124212s
	[INFO] 10.244.2.2:50808 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000124833s
	[INFO] 10.244.1.2:47587 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190204s
	[INFO] 10.244.0.4:58247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001672481s
	[INFO] 10.244.0.4:37091 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169137s
	[INFO] 10.244.0.4:48641 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098052s
	[INFO] 10.244.2.2:54836 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104545s
	[INFO] 10.244.2.2:40126 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854336s
	[INFO] 10.244.2.2:52894 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163896s
	[INFO] 10.244.2.2:35333 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230414s
	[INFO] 10.244.0.4:41974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152869s
	[INFO] 10.244.0.4:36380 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062783s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048517s
	[INFO] 10.244.2.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018024s
	[INFO] 10.244.2.2:38193 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125455s
	[INFO] 10.244.1.2:33651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271979s
	[INFO] 10.244.1.2:35705 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159131s
	[INFO] 10.244.0.4:48176 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111737s
	[INFO] 10.244.0.4:38598 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127464s
	[INFO] 10.244.0.4:32940 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000141046s
	[INFO] 10.244.2.2:43181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212895s
	[INFO] 10.244.2.2:43421 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090558s
	
	
	==> describe nodes <==
	Name:               ha-273199
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:53:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-273199
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c1c6593d854f8388a3b75213b790ab
	  System UUID:                c4c1c659-3d85-4f83-88a3-b75213b790ab
	  Boot ID:                    1bfb0ff9-0991-4c08-97cb-b1b218815106
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-7rnn9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 coredns-7c65d6cfc9-hc26g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 etcd-ha-273199                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-2gldl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-273199             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-ha-273199    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-tr5vf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-273199             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-273199                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m2s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m10s                  kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s                  kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s                  kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  NodeReady                5m52s                  kubelet          Node ha-273199 status is now: NodeReady
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	
	
	Name:               ha-273199-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:54:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:57:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-273199-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d185c9b1be043df924a5dc234d517bb
	  System UUID:                2d185c9b-1be0-43df-924a-5dc234d517bb
	  Boot ID:                    707068c3-7da2-4705-9622-6b089ce29c40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8tvkk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-273199-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-ts2mp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-apiserver-ha-273199-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-273199-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-nrzn7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-ha-273199-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-vip-ha-273199-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-273199-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-273199-m02 status is now: NodeNotReady
	
	
	Name:               ha-273199-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:55:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-273199-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d112805c85f46e58297ecf352114eb9
	  System UUID:                1d112805-c85f-46e5-8297-ecf352114eb9
	  Boot ID:                    07c61f8b-a2c4-4310-b7a1-41ac039bba9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g54mk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-273199-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-rz4mf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-273199-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-273199-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-9g4h7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-273199-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-273199-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-273199-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	
	
	Name:               ha-273199-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_56_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:57:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ha-273199-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43b84cefa5dd4131ade4071e67ae7a87
	  System UUID:                43b84cef-a5dd-4131-ade4-071e67ae7a87
	  Boot ID:                    bfbeda91-dd05-4597-adc6-b479c1c2dd66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bx2hn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-7pzm5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)  kubelet          Node ha-273199-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-273199-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049625] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.737052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891479] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.789015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.644647] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.122482] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.184258] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.115821] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.235503] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.601274] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.514017] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.057056] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251877] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.071885] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.801233] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.354632] kauditd_printk_skb: 38 callbacks suppressed
	[Oct28 11:54] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3] <==
	{"level":"warn","ts":"2024-10-28T11:59:48.145298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.151857Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.155083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.165195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.181221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.200074Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.204213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.212747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.217138Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.228292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.238030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.250346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.264963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.268199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.276547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.282729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.292221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.296562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.299918Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.300145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.302607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.304729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.309285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.313473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:48.321848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:59:48 up 6 min,  0 users,  load average: 0.35, 0.34, 0.18
	Linux ha-273199 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9] <==
	I1028 11:59:16.530799       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:26.530030       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:26.530150       1 main.go:300] handling current node
	I1028 11:59:26.530184       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:26.530202       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:26.530461       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:26.530495       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:26.530632       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:26.530655       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:36.531055       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:36.531126       1 main.go:300] handling current node
	I1028 11:59:36.531149       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:36.531155       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:36.531406       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:36.531425       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:36.531556       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:36.531571       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:46.530412       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:46.530590       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:46.531165       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:46.531265       1 main.go:300] handling current node
	I1028 11:59:46.531299       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:46.531355       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:46.531643       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:46.531670       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56] <==
	I1028 11:53:37.479954       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:53:38.366724       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:53:38.396043       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:53:38.413224       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:53:42.979540       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:53:43.083644       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:55:40.973661       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.973734       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.741µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1028 11:55:40.974882       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.976075       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.977370       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.890629ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1028 11:56:12.749438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33980: use of closed network connection
	E1028 11:56:12.923851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33996: use of closed network connection
	E1028 11:56:13.281780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34038: use of closed network connection
	E1028 11:56:13.456851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34054: use of closed network connection
	E1028 11:56:13.625829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34076: use of closed network connection
	E1028 11:56:13.792266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34090: use of closed network connection
	E1028 11:56:13.965533       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34100: use of closed network connection
	E1028 11:56:14.136211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34124: use of closed network connection
	E1028 11:56:14.414608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34162: use of closed network connection
	E1028 11:56:14.591367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34188: use of closed network connection
	E1028 11:56:14.760347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34200: use of closed network connection
	E1028 11:56:14.922486       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34206: use of closed network connection
	E1028 11:56:15.092625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34220: use of closed network connection
	E1028 11:56:15.260557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34244: use of closed network connection
	
	
	==> kube-controller-manager [07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df] <==
	I1028 11:56:41.255363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.287882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.504368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.718228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m03"
	I1028 11:56:41.866442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.227080       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-273199-m04"
	I1028 11:56:42.253788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.533477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199"
	I1028 11:56:43.703600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:43.733191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.386515       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.495725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:51.380862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:57:01.650243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:02.239477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:12.162277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:58:02.262145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.262722       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:58:02.289111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.371759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.617397ms"
	I1028 11:58:02.371873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.712µs"
	I1028 11:58:03.751638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:07.489074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	
	
	==> kube-proxy [82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:53:45.160274       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:53:45.173814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E1028 11:53:45.173942       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:53:45.205451       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:53:45.205509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:53:45.205540       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:53:45.207870       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:53:45.208259       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:53:45.208291       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:53:45.209606       1 config.go:328] "Starting node config controller"
	I1028 11:53:45.209665       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:53:45.210054       1 config.go:199] "Starting service config controller"
	I1028 11:53:45.210078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:53:45.210110       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:53:45.210127       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:53:45.310570       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:53:45.310626       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:53:45.310585       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c] <==
	I1028 11:53:39.113228       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:55:40.277591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.278684       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 164d41fa-0fff-4f4c-8f09-011e57fc1094(kube-system/kindnet-whfj9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-whfj9"
	E1028 11:55:40.278764       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-whfj9"
	I1028 11:55:40.278832       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.294817       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.294939       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 88c92727-3ef1-4b38-9df5-771fe9917f5e(kube-system/kube-proxy-qxpt8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qxpt8"
	E1028 11:55:40.294972       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-qxpt8"
	I1028 11:55:40.295047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.307670       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.307788       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4899b8e5-73ce-487e-81ca-f833a1dc900b(kube-system/kube-proxy-9g4h7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9g4h7"
	E1028 11:55:40.307822       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-9g4h7"
	I1028 11:55:40.307855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.324371       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:40.324469       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6b2fd99-538e-49be-bda5-b0e1c9edb32c(kube-system/kindnet-4bn7m) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4bn7m"
	E1028 11:55:40.324505       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-4bn7m"
	I1028 11:55:40.324540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:42.324511       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:55:42.324607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33ad0e92-e29c-4e54-8593-7cffd69fd439(kube-system/kindnet-rz4mf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rz4mf"
	E1028 11:55:42.324641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-rz4mf"
	I1028 11:55:42.324700       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:56:08.295366       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	E1028 11:56:08.295536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7e89846f-39f0-42a4-b343-0ae004376bc7(default/busybox-7dff88458-fnvwg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fnvwg"
	E1028 11:56:08.295580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" pod="default/busybox-7dff88458-fnvwg"
	I1028 11:56:08.295605       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	
	
	==> kubelet <==
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:58:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351743    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351767    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353760    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353814    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356841    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356866    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358886    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358944    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.361731    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.362240    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363560    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363977    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.290570    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366212    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366235    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:48 ha-273199 kubelet[1304]: E1028 11:59:48.367653    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116788367307757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:48 ha-273199 kubelet[1304]: E1028 11:59:48.367685    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116788367307757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-273199 -n ha-273199
helpers_test.go:261: (dbg) Run:  kubectl --context ha-273199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr: (4.038576081s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
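As a rough manual equivalent of the four status checks above (a sketch only; it assumes the ha-273199 profile from this run is still present on the build host), one could run:

	out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr    # per-node Host / Kubelet / APIServer state
	kubectl --context ha-273199 get nodes                                  # all four ha-273199 nodes should report Ready

Both commands already appear elsewhere in this report; they are repeated here only to illustrate what the assertions at ha_test.go:437-446 inspect.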
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-273199 -n ha-273199
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 logs -n 25: (1.259856378s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m03_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m04 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp testdata/cp-test.txt                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m04_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03:/home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m03 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-273199 node stop m02 -v=7                                                     | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-273199 node start m02 -v=7                                                    | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:52:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:52:57.905238   95151 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:52:57.905348   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905358   95151 out.go:358] Setting ErrFile to fd 2...
	I1028 11:52:57.905363   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905525   95151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:52:57.906087   95151 out.go:352] Setting JSON to false
	I1028 11:52:57.907021   95151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5728,"bootTime":1730110650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:52:57.907126   95151 start.go:139] virtualization: kvm guest
	I1028 11:52:57.909586   95151 out.go:177] * [ha-273199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:52:57.911228   95151 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:52:57.911224   95151 notify.go:220] Checking for updates...
	I1028 11:52:57.912881   95151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:52:57.914463   95151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:52:57.915977   95151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:57.917406   95151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:52:57.918858   95151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:52:57.920382   95151 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:52:57.956004   95151 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:52:57.957439   95151 start.go:297] selected driver: kvm2
	I1028 11:52:57.957454   95151 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:52:57.957467   95151 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:52:57.958216   95151 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.958309   95151 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:52:57.973197   95151 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:52:57.973244   95151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:52:57.973498   95151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:52:57.973536   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:52:57.973597   95151 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:52:57.973608   95151 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:52:57.973673   95151 start.go:340] cluster config:
	{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1028 11:52:57.973775   95151 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.975793   95151 out.go:177] * Starting "ha-273199" primary control-plane node in "ha-273199" cluster
	I1028 11:52:57.977410   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:52:57.977445   95151 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:52:57.977454   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:52:57.977554   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:52:57.977568   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:52:57.977888   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:52:57.977914   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json: {Name:mk29535b2b544db75ec78b7c2f3618df28a4affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:52:57.978059   95151 start.go:360] acquireMachinesLock for ha-273199: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:52:57.978100   95151 start.go:364] duration metric: took 24.255µs to acquireMachinesLock for "ha-273199"
	I1028 11:52:57.978122   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:52:57.978188   95151 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:52:57.980939   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:52:57.981099   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:57.981147   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:57.995094   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I1028 11:52:57.995525   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:57.996093   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:52:57.996110   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:57.996513   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:57.996734   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:52:57.996948   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:52:57.997198   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:52:57.997236   95151 client.go:168] LocalClient.Create starting
	I1028 11:52:57.997293   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:52:57.997346   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997371   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997456   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:52:57.997488   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997509   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997543   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:52:57.997564   95151 main.go:141] libmachine: (ha-273199) Calling .PreCreateCheck
	I1028 11:52:57.998077   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:52:57.998575   95151 main.go:141] libmachine: Creating machine...
	I1028 11:52:57.998591   95151 main.go:141] libmachine: (ha-273199) Calling .Create
	I1028 11:52:57.998762   95151 main.go:141] libmachine: (ha-273199) Creating KVM machine...
	I1028 11:52:58.000213   95151 main.go:141] libmachine: (ha-273199) DBG | found existing default KVM network
	I1028 11:52:58.000923   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.000765   95174 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045e0}
	I1028 11:52:58.000944   95151 main.go:141] libmachine: (ha-273199) DBG | created network xml: 
	I1028 11:52:58.000958   95151 main.go:141] libmachine: (ha-273199) DBG | <network>
	I1028 11:52:58.000965   95151 main.go:141] libmachine: (ha-273199) DBG |   <name>mk-ha-273199</name>
	I1028 11:52:58.000975   95151 main.go:141] libmachine: (ha-273199) DBG |   <dns enable='no'/>
	I1028 11:52:58.000981   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.000999   95151 main.go:141] libmachine: (ha-273199) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:52:58.001012   95151 main.go:141] libmachine: (ha-273199) DBG |     <dhcp>
	I1028 11:52:58.001028   95151 main.go:141] libmachine: (ha-273199) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:52:58.001044   95151 main.go:141] libmachine: (ha-273199) DBG |     </dhcp>
	I1028 11:52:58.001076   95151 main.go:141] libmachine: (ha-273199) DBG |   </ip>
	I1028 11:52:58.001096   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.001107   95151 main.go:141] libmachine: (ha-273199) DBG | </network>
	I1028 11:52:58.001116   95151 main.go:141] libmachine: (ha-273199) DBG | 
	I1028 11:52:58.006306   95151 main.go:141] libmachine: (ha-273199) DBG | trying to create private KVM network mk-ha-273199 192.168.39.0/24...
	I1028 11:52:58.068689   95151 main.go:141] libmachine: (ha-273199) DBG | private KVM network mk-ha-273199 192.168.39.0/24 created
	I1028 11:52:58.068733   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.068675   95174 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.068745   95151 main.go:141] libmachine: (ha-273199) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.068764   95151 main.go:141] libmachine: (ha-273199) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:52:58.068841   95151 main.go:141] libmachine: (ha-273199) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:52:58.350673   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.350525   95174 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa...
	I1028 11:52:58.570859   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570715   95174 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk...
	I1028 11:52:58.570893   95151 main.go:141] libmachine: (ha-273199) DBG | Writing magic tar header
	I1028 11:52:58.570902   95151 main.go:141] libmachine: (ha-273199) DBG | Writing SSH key tar header
	I1028 11:52:58.570910   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570831   95174 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.570926   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199
	I1028 11:52:58.570998   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 (perms=drwx------)
	I1028 11:52:58.571026   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:52:58.571056   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:52:58.571074   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.571082   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:52:58.571094   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:52:58.571102   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:52:58.571107   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home
	I1028 11:52:58.571113   95151 main.go:141] libmachine: (ha-273199) DBG | Skipping /home - not owner
	I1028 11:52:58.571126   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:52:58.571143   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:52:58.571178   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:52:58.571193   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:52:58.571219   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:58.572260   95151 main.go:141] libmachine: (ha-273199) define libvirt domain using xml: 
	I1028 11:52:58.572286   95151 main.go:141] libmachine: (ha-273199) <domain type='kvm'>
	I1028 11:52:58.572294   95151 main.go:141] libmachine: (ha-273199)   <name>ha-273199</name>
	I1028 11:52:58.572299   95151 main.go:141] libmachine: (ha-273199)   <memory unit='MiB'>2200</memory>
	I1028 11:52:58.572304   95151 main.go:141] libmachine: (ha-273199)   <vcpu>2</vcpu>
	I1028 11:52:58.572308   95151 main.go:141] libmachine: (ha-273199)   <features>
	I1028 11:52:58.572313   95151 main.go:141] libmachine: (ha-273199)     <acpi/>
	I1028 11:52:58.572324   95151 main.go:141] libmachine: (ha-273199)     <apic/>
	I1028 11:52:58.572330   95151 main.go:141] libmachine: (ha-273199)     <pae/>
	I1028 11:52:58.572339   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572346   95151 main.go:141] libmachine: (ha-273199)   </features>
	I1028 11:52:58.572356   95151 main.go:141] libmachine: (ha-273199)   <cpu mode='host-passthrough'>
	I1028 11:52:58.572364   95151 main.go:141] libmachine: (ha-273199)   
	I1028 11:52:58.572375   95151 main.go:141] libmachine: (ha-273199)   </cpu>
	I1028 11:52:58.572382   95151 main.go:141] libmachine: (ha-273199)   <os>
	I1028 11:52:58.572393   95151 main.go:141] libmachine: (ha-273199)     <type>hvm</type>
	I1028 11:52:58.572409   95151 main.go:141] libmachine: (ha-273199)     <boot dev='cdrom'/>
	I1028 11:52:58.572428   95151 main.go:141] libmachine: (ha-273199)     <boot dev='hd'/>
	I1028 11:52:58.572442   95151 main.go:141] libmachine: (ha-273199)     <bootmenu enable='no'/>
	I1028 11:52:58.572452   95151 main.go:141] libmachine: (ha-273199)   </os>
	I1028 11:52:58.572462   95151 main.go:141] libmachine: (ha-273199)   <devices>
	I1028 11:52:58.572470   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='cdrom'>
	I1028 11:52:58.572481   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/boot2docker.iso'/>
	I1028 11:52:58.572489   95151 main.go:141] libmachine: (ha-273199)       <target dev='hdc' bus='scsi'/>
	I1028 11:52:58.572513   95151 main.go:141] libmachine: (ha-273199)       <readonly/>
	I1028 11:52:58.572529   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572544   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='disk'>
	I1028 11:52:58.572557   95151 main.go:141] libmachine: (ha-273199)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:52:58.572570   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk'/>
	I1028 11:52:58.572580   95151 main.go:141] libmachine: (ha-273199)       <target dev='hda' bus='virtio'/>
	I1028 11:52:58.572589   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572599   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572625   95151 main.go:141] libmachine: (ha-273199)       <source network='mk-ha-273199'/>
	I1028 11:52:58.572647   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572659   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572669   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572681   95151 main.go:141] libmachine: (ha-273199)       <source network='default'/>
	I1028 11:52:58.572689   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572698   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572708   95151 main.go:141] libmachine: (ha-273199)     <serial type='pty'>
	I1028 11:52:58.572719   95151 main.go:141] libmachine: (ha-273199)       <target port='0'/>
	I1028 11:52:58.572747   95151 main.go:141] libmachine: (ha-273199)     </serial>
	I1028 11:52:58.572759   95151 main.go:141] libmachine: (ha-273199)     <console type='pty'>
	I1028 11:52:58.572769   95151 main.go:141] libmachine: (ha-273199)       <target type='serial' port='0'/>
	I1028 11:52:58.572780   95151 main.go:141] libmachine: (ha-273199)     </console>
	I1028 11:52:58.572789   95151 main.go:141] libmachine: (ha-273199)     <rng model='virtio'>
	I1028 11:52:58.572801   95151 main.go:141] libmachine: (ha-273199)       <backend model='random'>/dev/random</backend>
	I1028 11:52:58.572815   95151 main.go:141] libmachine: (ha-273199)     </rng>
	I1028 11:52:58.572825   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572833   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572844   95151 main.go:141] libmachine: (ha-273199)   </devices>
	I1028 11:52:58.572852   95151 main.go:141] libmachine: (ha-273199) </domain>
	I1028 11:52:58.572861   95151 main.go:141] libmachine: (ha-273199) 
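The domain XML dumped above is handed to libvirt to define and then boot the guest ("Creating domain..."). As a rough illustration only, and not minikube's actual KVM driver code, the define-then-start flow with the libvirt Go bindings (libvirt.org/go/libvirt, against the qemu:///system URI used by this profile) looks roughly like this; the domainXML argument stands in for the XML printed above:

    // Illustrative sketch: define a persistent KVM domain from XML, then start it.
    package main

    import (
    	"fmt"
    	"log"

    	"libvirt.org/go/libvirt"
    )

    func createDomain(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return fmt.Errorf("connecting to libvirt: %w", err)
    	}
    	defer conn.Close()

    	// "define libvirt domain using xml" in the log corresponds to this call.
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return fmt.Errorf("defining domain: %w", err)
    	}
    	defer dom.Free()

    	// "Creating domain..." corresponds to starting the defined guest.
    	if err := dom.Create(); err != nil {
    		return fmt.Errorf("starting domain: %w", err)
    	}
    	return nil
    }

    func main() {
    	if err := createDomain("<domain type='kvm'>...</domain>"); err != nil {
    		log.Fatal(err)
    	}
    }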
	I1028 11:52:58.577134   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:42:ba:53 in network default
	I1028 11:52:58.577786   95151 main.go:141] libmachine: (ha-273199) Ensuring networks are active...
	I1028 11:52:58.577821   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:58.578546   95151 main.go:141] libmachine: (ha-273199) Ensuring network default is active
	I1028 11:52:58.578856   95151 main.go:141] libmachine: (ha-273199) Ensuring network mk-ha-273199 is active
	I1028 11:52:58.579358   95151 main.go:141] libmachine: (ha-273199) Getting domain xml...
	I1028 11:52:58.580118   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:59.782570   95151 main.go:141] libmachine: (ha-273199) Waiting to get IP...
	I1028 11:52:59.783496   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:59.783901   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:52:59.783927   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:59.783876   95174 retry.go:31] will retry after 311.934457ms: waiting for machine to come up
	I1028 11:53:00.097445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.097916   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.097939   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.097877   95174 retry.go:31] will retry after 388.795801ms: waiting for machine to come up
	I1028 11:53:00.488689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.489130   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.489162   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.489047   95174 retry.go:31] will retry after 341.439374ms: waiting for machine to come up
	I1028 11:53:00.831825   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.832326   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.832354   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.832259   95174 retry.go:31] will retry after 537.545151ms: waiting for machine to come up
	I1028 11:53:01.371089   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.371572   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.371603   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.371503   95174 retry.go:31] will retry after 575.351282ms: waiting for machine to come up
	I1028 11:53:01.948343   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.948756   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.948778   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.948711   95174 retry.go:31] will retry after 886.467527ms: waiting for machine to come up
	I1028 11:53:02.836558   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:02.836900   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:02.836930   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:02.836853   95174 retry.go:31] will retry after 1.015980502s: waiting for machine to come up
	I1028 11:53:03.854959   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:03.855391   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:03.855437   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:03.855271   95174 retry.go:31] will retry after 1.050486499s: waiting for machine to come up
	I1028 11:53:04.907614   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:04.908201   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:04.908229   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:04.908145   95174 retry.go:31] will retry after 1.491832435s: waiting for machine to come up
	I1028 11:53:06.401910   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:06.402491   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:06.402518   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:06.402445   95174 retry.go:31] will retry after 1.441307708s: waiting for machine to come up
	I1028 11:53:07.846099   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:07.846578   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:07.846619   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:07.846526   95174 retry.go:31] will retry after 2.820165032s: waiting for machine to come up
	I1028 11:53:10.670238   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:10.670586   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:10.670616   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:10.670541   95174 retry.go:31] will retry after 2.961295833s: waiting for machine to come up
	I1028 11:53:13.633316   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:13.633782   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:13.633805   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:13.633732   95174 retry.go:31] will retry after 3.308614209s: waiting for machine to come up
	I1028 11:53:16.945522   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:16.946011   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:16.946110   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:16.946030   95174 retry.go:31] will retry after 3.990479431s: waiting for machine to come up
	I1028 11:53:20.937712   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938109   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has current primary IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938130   95151 main.go:141] libmachine: (ha-273199) Found IP for machine: 192.168.39.208
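The repeated "will retry after ...: waiting for machine to come up" lines above come from a polling loop that keeps checking the libvirt network's DHCP leases for the guest's MAC address, sleeping an increasing, jittered interval between attempts until an IP appears. A minimal sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease query, and the backoff constants are illustrative rather than minikube's actual retry.go values:

    // Sketch of the wait-for-IP retry loop seen in the log above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a placeholder for querying DHCP leases for the guest's MAC.
    func lookupIP(mac string) (string, error) {
    	return "", errNoLease
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		// Jittered, growing delay, like the increasing intervals in the log.
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff = backoff * 3 / 2
    	}
    	return "", fmt.Errorf("timed out waiting for IP for %s", mac)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:22:d4:52", 2*time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }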
	I1028 11:53:20.938142   95151 main.go:141] libmachine: (ha-273199) Reserving static IP address...
	I1028 11:53:20.938499   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find host DHCP lease matching {name: "ha-273199", mac: "52:54:00:22:d4:52", ip: "192.168.39.208"} in network mk-ha-273199
	I1028 11:53:21.008969   95151 main.go:141] libmachine: (ha-273199) DBG | Getting to WaitForSSH function...
	I1028 11:53:21.008999   95151 main.go:141] libmachine: (ha-273199) Reserved static IP address: 192.168.39.208
	I1028 11:53:21.009011   95151 main.go:141] libmachine: (ha-273199) Waiting for SSH to be available...
	I1028 11:53:21.011668   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012047   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.012076   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012164   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH client type: external
	I1028 11:53:21.012204   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa (-rw-------)
	I1028 11:53:21.012233   95151 main.go:141] libmachine: (ha-273199) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:53:21.012252   95151 main.go:141] libmachine: (ha-273199) DBG | About to run SSH command:
	I1028 11:53:21.012267   95151 main.go:141] libmachine: (ha-273199) DBG | exit 0
	I1028 11:53:21.139407   95151 main.go:141] libmachine: (ha-273199) DBG | SSH cmd err, output: <nil>: 
	I1028 11:53:21.139608   95151 main.go:141] libmachine: (ha-273199) KVM machine creation complete!
	I1028 11:53:21.140109   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:21.140683   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.140882   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.141093   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:53:21.141114   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:21.142660   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:53:21.142693   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:53:21.142699   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:53:21.142707   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.144906   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145252   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.145272   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145401   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.145570   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145700   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145811   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.145966   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.146169   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.146182   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:53:21.258494   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:53:21.258518   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:53:21.258525   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.261399   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.261893   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.261920   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.262110   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.262319   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262635   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.262887   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.263058   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.263068   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:53:21.376384   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:53:21.376474   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:53:21.376484   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:53:21.376495   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376737   95151 buildroot.go:166] provisioning hostname "ha-273199"
	I1028 11:53:21.376768   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376959   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.379689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380146   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.380176   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380378   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.380584   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380744   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380879   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.381094   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.381292   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.381311   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199 && echo "ha-273199" | sudo tee /etc/hostname
	I1028 11:53:21.505313   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 11:53:21.505340   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.507973   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508308   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.508335   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508498   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.508627   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508764   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.509011   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.509180   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.509205   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:53:21.627427   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:53:21.627469   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:53:21.627526   95151 buildroot.go:174] setting up certificates
	I1028 11:53:21.627546   95151 provision.go:84] configureAuth start
	I1028 11:53:21.627563   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.627864   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:21.630491   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.630851   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.630879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.631007   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.633459   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.633874   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.633904   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.634035   95151 provision.go:143] copyHostCerts
	I1028 11:53:21.634064   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634109   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:53:21.634121   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634183   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:53:21.634289   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634308   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:53:21.634312   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634344   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:53:21.634423   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634439   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:53:21.634443   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634469   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:53:21.634525   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199 san=[127.0.0.1 192.168.39.208 ha-273199 localhost minikube]
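The provision step above issues a server certificate signed by the profile's CA, covering the SANs listed in the log entry (127.0.0.1, 192.168.39.208, ha-273199, localhost, minikube). As a sketch of that technique only, not minikube's provision code, issuing such a certificate with the standard library looks roughly like this; caCert and caKey are assumed to be already-parsed CA material, and the validity period and key size are illustrative:

    // Sketch: issue a CA-signed server certificate with the SANs from the log.
    package certsketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-273199"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log entry above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.208")},
    		DNSNames:    []string{"ha-273199", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }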
	I1028 11:53:21.941769   95151 provision.go:177] copyRemoteCerts
	I1028 11:53:21.941844   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:53:21.941871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.944301   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944588   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.944615   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944775   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.945004   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.945172   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.945312   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.028802   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:53:22.028910   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:53:22.051394   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:53:22.051457   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 11:53:22.072047   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:53:22.072099   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:53:22.092704   95151 provision.go:87] duration metric: took 465.141947ms to configureAuth
	I1028 11:53:22.092729   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:53:22.092901   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:22.092986   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.095606   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.095961   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.095988   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.096168   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.096372   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096528   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096655   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.096802   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.096954   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.096969   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:53:22.312757   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:53:22.312785   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:53:22.312806   95151 main.go:141] libmachine: (ha-273199) Calling .GetURL
	I1028 11:53:22.313992   95151 main.go:141] libmachine: (ha-273199) DBG | Using libvirt version 6000000
	I1028 11:53:22.316240   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316567   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.316595   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316828   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:53:22.316850   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:53:22.316861   95151 client.go:171] duration metric: took 24.31961411s to LocalClient.Create
	I1028 11:53:22.316914   95151 start.go:167] duration metric: took 24.319696986s to libmachine.API.Create "ha-273199"
	I1028 11:53:22.316928   95151 start.go:293] postStartSetup for "ha-273199" (driver="kvm2")
	I1028 11:53:22.316942   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:53:22.316962   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.317200   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:53:22.317223   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.319445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320158   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.320178   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320347   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.320534   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.320674   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.320778   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.406034   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:53:22.409957   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:53:22.409983   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:53:22.410056   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:53:22.410194   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:53:22.410209   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:53:22.410362   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:53:22.418934   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:22.439625   95151 start.go:296] duration metric: took 122.683745ms for postStartSetup
	I1028 11:53:22.439684   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:22.440268   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.442702   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443017   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.443035   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443281   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:22.443438   95151 start.go:128] duration metric: took 24.465239541s to createHost
	I1028 11:53:22.443459   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.446282   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446621   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.446650   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446768   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.446935   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447095   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447222   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.447404   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.447574   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.447589   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:53:22.559751   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116402.538168741
	
	I1028 11:53:22.559780   95151 fix.go:216] guest clock: 1730116402.538168741
	I1028 11:53:22.559788   95151 fix.go:229] Guest: 2024-10-28 11:53:22.538168741 +0000 UTC Remote: 2024-10-28 11:53:22.443448629 +0000 UTC m=+24.575720280 (delta=94.720112ms)
	I1028 11:53:22.559821   95151 fix.go:200] guest clock delta is within tolerance: 94.720112ms
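The fix.go lines above read the guest's clock over SSH (date +%s.%N), compare it with the host's wall clock, and accept the machine only if the absolute delta stays within a tolerance. A small sketch of that check follows; the 1s tolerance is an assumed illustrative value, not taken from the log:

    // Sketch of the guest-clock tolerance check logged above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(94720112 * time.Nanosecond) // the 94.720112ms delta from the log
    	if d, ok := clockDeltaOK(guest, host, time.Second); ok {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
    	} else {
    		fmt.Printf("guest clock delta too large: %v\n", d)
    	}
    }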
	I1028 11:53:22.559826   95151 start.go:83] releasing machines lock for "ha-273199", held for 24.581718789s
	I1028 11:53:22.559851   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.560134   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.562796   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563147   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.563185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563312   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563844   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563988   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.564076   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:53:22.564130   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.564190   95151 ssh_runner.go:195] Run: cat /version.json
	I1028 11:53:22.564216   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.566705   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.566929   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567041   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567064   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567296   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567390   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567416   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567469   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567580   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567668   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567738   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567794   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.567840   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567980   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.670647   95151 ssh_runner.go:195] Run: systemctl --version
	I1028 11:53:22.676078   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:53:22.830303   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:53:22.836224   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:53:22.836288   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:53:22.850695   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:53:22.850718   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:53:22.850775   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:53:22.865306   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:53:22.877632   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:53:22.877682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:53:22.889956   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:53:22.901677   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:53:23.007362   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:53:23.168538   95151 docker.go:233] disabling docker service ...
	I1028 11:53:23.168615   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:53:23.181374   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:53:23.192932   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:53:23.310662   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:53:23.424314   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:53:23.437058   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:53:23.453309   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:53:23.453391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.462468   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:53:23.462530   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.471391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.480284   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.489458   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:53:23.498558   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.507348   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.522430   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.531223   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:53:23.539417   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:53:23.539455   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:53:23.551001   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:53:23.559053   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:23.661360   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:53:23.745420   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:53:23.745500   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:53:23.749645   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:53:23.749737   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:53:23.753175   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:53:23.787639   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
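The "Will wait 60s for socket path /var/run/crio/crio.sock" step above simply polls for the runtime socket to appear after restarting CRI-O, before crictl is queried. A minimal sketch of that wait, assuming a fixed poll interval (the interval is an assumption, and this is not the actual start.go implementation):

    // Sketch: wait for the CRI socket to exist before querying crictl.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket is present
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }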
	I1028 11:53:23.787732   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.812312   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.837983   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:53:23.839279   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:23.841862   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842156   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:23.842185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842344   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:53:23.845848   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:53:23.857277   95151 kubeadm.go:883] updating cluster {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:53:23.857375   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:23.857429   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:23.885745   95151 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:53:23.885803   95151 ssh_runner.go:195] Run: which lz4
	I1028 11:53:23.889147   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:53:23.889231   95151 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:53:23.892744   95151 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:53:23.892778   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:53:24.999101   95151 crio.go:462] duration metric: took 1.10988801s to copy over tarball
	I1028 11:53:24.999192   95151 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:53:26.940236   95151 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.941006419s)
	I1028 11:53:26.940272   95151 crio.go:469] duration metric: took 1.941134954s to extract the tarball
	I1028 11:53:26.940283   95151 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:53:26.975750   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:27.015231   95151 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:53:27.015255   95151 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:53:27.015267   95151 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1028 11:53:27.015382   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:53:27.015466   95151 ssh_runner.go:195] Run: crio config
	I1028 11:53:27.056277   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:27.056302   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:27.056316   95151 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:53:27.056348   95151 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-273199 NodeName:ha-273199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:53:27.056497   95151 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-273199"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
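
	For reference, minikube renders the config above by filling a Go text template with the per-node values (node name, advertise address, pod and service CIDRs) and then ships the result to /var/tmp/minikube/kubeadm.yaml.new, as the scp lines further down show. The following is a rough, self-contained sketch of that rendering step; the template text and struct fields are simplified stand-ins, not minikube's actual bootstrapper code.

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig holds the handful of values that vary per node in this sketch.
	// The field names are illustrative; minikube's real config struct is larger.
	type nodeConfig struct {
		NodeName         string
		AdvertiseAddress string
		BindPort         int
		PodSubnet        string
		ServiceSubnet    string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		cfg := nodeConfig{
			NodeName:         "ha-273199",
			AdvertiseAddress: "192.168.39.208",
			BindPort:         8443,
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
		}
		// Render to stdout; minikube instead copies the rendered file to the VM over SSH.
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}
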
	
	I1028 11:53:27.056525   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:53:27.056581   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:53:27.072483   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:53:27.072593   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:53:27.072658   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:53:27.081034   95151 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:53:27.081092   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:53:27.089111   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:53:27.103592   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:53:27.118272   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:53:27.132197   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:53:27.146233   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:53:27.149485   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
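
	The bash one-liner above keeps the control-plane.minikube.internal record idempotent: any existing line ending in that hostname is filtered out before a fresh "IP<tab>hostname" entry is appended and the file is copied back over /etc/hosts. A small Go sketch of the same replace-then-append logic, operating on the file contents in memory (illustrative only; minikube actually runs the shell pipeline over SSH as logged):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostRecord drops any existing line for the given hostname and appends
	// a fresh "IP<TAB>hostname" entry, mirroring the grep/echo pipeline above.
	func upsertHostRecord(hostsFile, ip, hostname string) string {
		var kept []string
		for _, line := range strings.Split(hostsFile, "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // remove the stale record
			}
			kept = append(kept, line)
		}
		// Drop trailing blank lines so repeated updates do not accumulate gaps.
		for len(kept) > 0 && kept[len(kept)-1] == "" {
			kept = kept[:len(kept)-1]
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		original := "127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n"
		fmt.Print(upsertHostRecord(original, "192.168.39.254", "control-plane.minikube.internal"))
	}
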
	I1028 11:53:27.160138   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:27.266620   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:53:27.282436   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.208
	I1028 11:53:27.282457   95151 certs.go:194] generating shared ca certs ...
	I1028 11:53:27.282478   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.282670   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:53:27.282728   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:53:27.282741   95151 certs.go:256] generating profile certs ...
	I1028 11:53:27.282809   95151 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:53:27.282826   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt with IP's: []
	I1028 11:53:27.352056   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt ...
	I1028 11:53:27.352083   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt: {Name:mk85ba9e2d7e36c2dc386074345191c8f41db2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352257   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key ...
	I1028 11:53:27.352268   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key: {Name:mk9e399a746995b3286d90f34445304b7c10dcc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352359   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602
	I1028 11:53:27.352376   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.254]
	I1028 11:53:27.701864   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 ...
	I1028 11:53:27.701927   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602: {Name:mkd8347f84237c1adf80fa2979e2851e438e06db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702124   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 ...
	I1028 11:53:27.702141   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602: {Name:mk8022b5d8b42b8f2926882e2d9f76f284ea38fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702238   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:53:27.702318   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:53:27.702367   95151 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:53:27.702384   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt with IP's: []
	I1028 11:53:27.887171   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt ...
	I1028 11:53:27.887202   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt: {Name:mk8df5a7b5c3f3d68e29bbf5b564443cc1d6c268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.887348   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key ...
	I1028 11:53:27.887359   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key: {Name:mk563997b82cf259c7f4075de274f929660222b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
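
	The certs steps above sign the profile's client, apiserver and proxy-client certificates with the shared minikubeCA key; the apiserver certificate embeds the service IP, localhost, the node IP and the HA virtual IP as subject alternative names (the IP list is logged at the "Generating cert ... with IP's" line). A compressed, standard-library-only Go sketch of that idea, with error handling stripped; minikube's real logic lives in certs.go/crypto.go and covers many more cases:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA standing in for minikubeCA (errors ignored for brevity).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving certificate with the same IP SANs that the log shows for the apiserver cert.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.208"), net.ParseIP("192.168.39.254"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		leaf, _ := x509.ParseCertificate(leafDER)
		fmt.Println("issued cert with IP SANs:", leaf.IPAddresses)
	}
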
	I1028 11:53:27.887428   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:53:27.887444   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:53:27.887455   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:53:27.887469   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:53:27.887479   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:53:27.887493   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:53:27.887505   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:53:27.887517   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:53:27.887565   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:53:27.887608   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:53:27.887618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:53:27.887660   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:53:27.887680   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:53:27.887702   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:53:27.887740   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:27.887767   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:53:27.887780   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:27.887797   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:53:27.888376   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:53:27.912711   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:53:27.933465   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:53:27.954641   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:53:27.975959   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:53:27.996205   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:53:28.020327   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:53:28.061582   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:53:28.089945   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:53:28.110791   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:53:28.131009   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:53:28.150891   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:53:28.165153   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:53:28.170365   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:53:28.179779   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183529   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183568   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.188718   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:53:28.197725   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:53:28.206747   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210524   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210567   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.215456   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:53:28.224449   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:53:28.233481   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237734   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237779   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.242623   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 11:53:28.251661   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:53:28.255167   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:53:28.255214   95151 kubeadm.go:392] StartCluster: {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:53:28.255281   95151 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:53:28.255311   95151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:53:28.288882   95151 cri.go:89] found id: ""
	I1028 11:53:28.288966   95151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:53:28.297523   95151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:53:28.306258   95151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:53:28.314625   95151 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:53:28.314641   95151 kubeadm.go:157] found existing configuration files:
	
	I1028 11:53:28.314676   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:53:28.322612   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:53:28.322668   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:53:28.330792   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:53:28.338690   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:53:28.338727   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:53:28.346773   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.354775   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:53:28.354815   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.362916   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:53:28.370667   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:53:28.370718   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:53:28.378723   95151 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:53:28.563600   95151 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:53:38.972007   95151 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:53:38.972072   95151 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:53:38.972185   95151 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:53:38.972293   95151 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:53:38.972416   95151 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:53:38.972521   95151 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:53:38.974416   95151 out.go:235]   - Generating certificates and keys ...
	I1028 11:53:38.974509   95151 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:53:38.974601   95151 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:53:38.974706   95151 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:53:38.974787   95151 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:53:38.974879   95151 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:53:38.974959   95151 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:53:38.975036   95151 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:53:38.975286   95151 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975365   95151 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:53:38.975516   95151 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975611   95151 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:53:38.975722   95151 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:53:38.975797   95151 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:53:38.975877   95151 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:53:38.975944   95151 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:53:38.976014   95151 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:53:38.976064   95151 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:53:38.976141   95151 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:53:38.976202   95151 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:53:38.976272   95151 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:53:38.976334   95151 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:53:38.977999   95151 out.go:235]   - Booting up control plane ...
	I1028 11:53:38.978106   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:53:38.978178   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:53:38.978240   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:53:38.978347   95151 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:53:38.978445   95151 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:53:38.978486   95151 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:53:38.978635   95151 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:53:38.978759   95151 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:53:38.978849   95151 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001498504s
	I1028 11:53:38.978951   95151 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:53:38.979035   95151 kubeadm.go:310] [api-check] The API server is healthy after 5.77087672s
	I1028 11:53:38.979160   95151 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:53:38.979301   95151 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:53:38.979391   95151 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:53:38.979587   95151 kubeadm.go:310] [mark-control-plane] Marking the node ha-273199 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:53:38.979669   95151 kubeadm.go:310] [bootstrap-token] Using token: 2y659k.kh228wx7fnaw6qlw
	I1028 11:53:38.980850   95151 out.go:235]   - Configuring RBAC rules ...
	I1028 11:53:38.980953   95151 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:53:38.981063   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:53:38.981194   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:53:38.981315   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:53:38.981461   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:53:38.981577   95151 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:53:38.981701   95151 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:53:38.981766   95151 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:53:38.981845   95151 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:53:38.981853   95151 kubeadm.go:310] 
	I1028 11:53:38.981937   95151 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:53:38.981950   95151 kubeadm.go:310] 
	I1028 11:53:38.982070   95151 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:53:38.982082   95151 kubeadm.go:310] 
	I1028 11:53:38.982120   95151 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:53:38.982205   95151 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:53:38.982281   95151 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:53:38.982294   95151 kubeadm.go:310] 
	I1028 11:53:38.982369   95151 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:53:38.982381   95151 kubeadm.go:310] 
	I1028 11:53:38.982451   95151 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:53:38.982463   95151 kubeadm.go:310] 
	I1028 11:53:38.982538   95151 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:53:38.982640   95151 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:53:38.982741   95151 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:53:38.982752   95151 kubeadm.go:310] 
	I1028 11:53:38.982827   95151 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:53:38.982895   95151 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:53:38.982901   95151 kubeadm.go:310] 
	I1028 11:53:38.982972   95151 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983065   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 11:53:38.983084   95151 kubeadm.go:310] 	--control-plane 
	I1028 11:53:38.983090   95151 kubeadm.go:310] 
	I1028 11:53:38.983184   95151 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:53:38.983205   95151 kubeadm.go:310] 
	I1028 11:53:38.983290   95151 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983394   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
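
	Both join commands above carry a --discovery-token-ca-cert-hash value; kubeadm computes it as the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, so the same value can be recomputed from the ca.crt placed on the node earlier in this run. A minimal Go sketch:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash reproduces the value printed after --discovery-token-ca-cert-hash:
	// sha256 over the CA certificate's DER-encoded Subject Public Key Info.
	func caCertHash(caPath string) (string, error) {
		pemBytes, err := os.ReadFile(caPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return "", fmt.Errorf("no PEM block found in %s", caPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		// Path used on the node in this run (see the certs scp lines above).
		hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(hash)
	}
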
	I1028 11:53:38.983404   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:38.983412   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:38.985768   95151 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:53:38.987136   95151 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:53:38.992611   95151 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:53:38.992633   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:53:39.010322   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 11:53:39.369131   95151 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:53:39.369214   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199 minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=true
	I1028 11:53:39.369218   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:39.407348   95151 ops.go:34] apiserver oom_adj: -16
	I1028 11:53:39.512261   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.013130   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.512492   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.012760   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.512614   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.013105   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.513113   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.013197   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.130930   95151 kubeadm.go:1113] duration metric: took 3.761785969s to wait for elevateKubeSystemPrivileges
	I1028 11:53:43.130968   95151 kubeadm.go:394] duration metric: took 14.875757661s to StartCluster
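
	The burst of identical "kubectl get sa default" runs above (roughly every half second between 11:53:39 and 11:53:43) is a plain poll: the command is re-run until the default service account exists, and the elapsed time is then reported as the elevateKubeSystemPrivileges duration. Reduced to a self-contained Go sketch, with a hypothetical check function standing in for the kubectl call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries check at the given interval until it succeeds or the timeout passes.
	func pollUntil(interval, timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s: %w", timeout, err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		attempts := 0
		err := pollUntil(500*time.Millisecond, 10*time.Second, func() error {
			attempts++
			if attempts < 5 { // stand-in for "kubectl get sa default" not yet succeeding
				return errors.New(`serviceaccount "default" not found`)
			}
			return nil
		})
		fmt.Println("done after", attempts, "attempts, err =", err)
	}
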
	I1028 11:53:43.130992   95151 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.131082   95151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.131868   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.132066   95151 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.132080   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:53:43.132092   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:53:43.132110   95151 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:53:43.132191   95151 addons.go:69] Setting storage-provisioner=true in profile "ha-273199"
	I1028 11:53:43.132211   95151 addons.go:234] Setting addon storage-provisioner=true in "ha-273199"
	I1028 11:53:43.132226   95151 addons.go:69] Setting default-storageclass=true in profile "ha-273199"
	I1028 11:53:43.132243   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.132254   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.132263   95151 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-273199"
	I1028 11:53:43.132656   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132704   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.132733   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132778   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.148009   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I1028 11:53:43.148148   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I1028 11:53:43.148527   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.148654   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.149031   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149050   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149159   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149183   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149384   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149521   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149709   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.149923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.149968   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.152241   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.152594   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:53:43.153153   95151 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:53:43.153487   95151 addons.go:234] Setting addon default-storageclass=true in "ha-273199"
	I1028 11:53:43.153537   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.153923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.153966   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.165112   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I1028 11:53:43.165628   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.166122   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.166140   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.166447   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.166644   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.168390   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.168673   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1028 11:53:43.169162   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.169675   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.169697   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.170033   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.170484   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.170504   95151 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:53:43.170529   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.172043   95151 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.172062   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:53:43.172076   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.174879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175341   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.175404   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175532   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.175676   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.175782   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.175869   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.188178   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I1028 11:53:43.188778   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.189356   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.189374   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.189736   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.189945   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.191684   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.191903   95151 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.191914   95151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:53:43.191927   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.195100   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195553   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.195576   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195757   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.195929   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.196073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.196212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.240072   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:53:43.320825   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.357607   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.543521   95151 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
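
	The pipeline a few lines up edits the CoreDNS Corefile held in the coredns ConfigMap: it inserts a hosts block mapping host.minikube.internal to the host-only gateway (192.168.39.1) ahead of the forward plugin, adds a log directive, and replaces the ConfigMap. A simplified Go sketch of just the hosts-block injection (string manipulation only; pushing the result back with kubectl replace is omitted):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a "hosts" block ahead of the "forward . /etc/resolv.conf"
	// plugin in a CoreDNS Corefile, mirroring the sed expression used above.
	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
			hostIP)
		var out strings.Builder
		for _, line := range strings.Split(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hostsBlock)
			}
			out.WriteString(line + "\n")
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
		fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
	}
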
	I1028 11:53:43.793100   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793126   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793180   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793204   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793468   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793490   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793520   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793527   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793535   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793541   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793554   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793572   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793581   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793594   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793790   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793822   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793830   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793837   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793798   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793900   95151 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:53:43.793919   95151 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:53:43.794073   95151 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:53:43.794085   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.794095   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.794103   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.805561   95151 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:53:43.806144   95151 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:53:43.806158   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.806166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.806169   95151 round_trippers.go:473]     Content-Type: application/json
	I1028 11:53:43.806171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.809243   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:53:43.809609   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.809624   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.809925   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.809942   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.809968   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.812285   95151 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:53:43.813517   95151 addons.go:510] duration metric: took 681.412756ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:53:43.813552   95151 start.go:246] waiting for cluster config update ...
	I1028 11:53:43.813579   95151 start.go:255] writing updated cluster config ...
	I1028 11:53:43.815032   95151 out.go:201] 
	I1028 11:53:43.816430   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.816508   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.817974   95151 out.go:177] * Starting "ha-273199-m02" control-plane node in "ha-273199" cluster
	I1028 11:53:43.819185   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:43.819208   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:53:43.819300   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:53:43.819313   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:53:43.819381   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.819558   95151 start.go:360] acquireMachinesLock for ha-273199-m02: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:53:43.819623   95151 start.go:364] duration metric: took 33.288µs to acquireMachinesLock for "ha-273199-m02"
	I1028 11:53:43.819661   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.819740   95151 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:53:43.821273   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:53:43.821359   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.821393   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.836503   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1028 11:53:43.837015   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.837597   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.837620   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.837996   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.838155   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:53:43.838314   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:53:43.838482   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:53:43.838517   95151 client.go:168] LocalClient.Create starting
	I1028 11:53:43.838554   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:53:43.838592   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838613   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838664   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:53:43.838684   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838696   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838711   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:53:43.838718   95151 main.go:141] libmachine: (ha-273199-m02) Calling .PreCreateCheck
	I1028 11:53:43.838865   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:53:43.839217   95151 main.go:141] libmachine: Creating machine...
	I1028 11:53:43.839229   95151 main.go:141] libmachine: (ha-273199-m02) Calling .Create
	I1028 11:53:43.839340   95151 main.go:141] libmachine: (ha-273199-m02) Creating KVM machine...
	I1028 11:53:43.840585   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing default KVM network
	I1028 11:53:43.840677   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing private KVM network mk-ha-273199
	I1028 11:53:43.840819   95151 main.go:141] libmachine: (ha-273199-m02) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:43.840837   95151 main.go:141] libmachine: (ha-273199-m02) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:53:43.840944   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:43.840827   95521 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:43.841035   95151 main.go:141] libmachine: (ha-273199-m02) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:53:44.101967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.101844   95521 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa...
	I1028 11:53:44.215652   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215521   95521 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk...
	I1028 11:53:44.215686   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing magic tar header
	I1028 11:53:44.215700   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing SSH key tar header
	I1028 11:53:44.215717   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215655   95521 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:44.215805   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02
	I1028 11:53:44.215837   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:53:44.215846   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 (perms=drwx------)
	I1028 11:53:44.215856   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:53:44.215863   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:53:44.215873   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:53:44.215879   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:53:44.215889   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:53:44.215894   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:44.215903   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:44.215911   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:53:44.215919   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:53:44.215925   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:53:44.215930   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home
	I1028 11:53:44.215935   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Skipping /home - not owner
	I1028 11:53:44.216891   95151 main.go:141] libmachine: (ha-273199-m02) define libvirt domain using xml: 
	I1028 11:53:44.216918   95151 main.go:141] libmachine: (ha-273199-m02) <domain type='kvm'>
	I1028 11:53:44.216933   95151 main.go:141] libmachine: (ha-273199-m02)   <name>ha-273199-m02</name>
	I1028 11:53:44.216941   95151 main.go:141] libmachine: (ha-273199-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:53:44.216950   95151 main.go:141] libmachine: (ha-273199-m02)   <vcpu>2</vcpu>
	I1028 11:53:44.216957   95151 main.go:141] libmachine: (ha-273199-m02)   <features>
	I1028 11:53:44.216966   95151 main.go:141] libmachine: (ha-273199-m02)     <acpi/>
	I1028 11:53:44.216976   95151 main.go:141] libmachine: (ha-273199-m02)     <apic/>
	I1028 11:53:44.216983   95151 main.go:141] libmachine: (ha-273199-m02)     <pae/>
	I1028 11:53:44.216989   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.216999   95151 main.go:141] libmachine: (ha-273199-m02)   </features>
	I1028 11:53:44.217007   95151 main.go:141] libmachine: (ha-273199-m02)   <cpu mode='host-passthrough'>
	I1028 11:53:44.217034   95151 main.go:141] libmachine: (ha-273199-m02)   
	I1028 11:53:44.217056   95151 main.go:141] libmachine: (ha-273199-m02)   </cpu>
	I1028 11:53:44.217068   95151 main.go:141] libmachine: (ha-273199-m02)   <os>
	I1028 11:53:44.217079   95151 main.go:141] libmachine: (ha-273199-m02)     <type>hvm</type>
	I1028 11:53:44.217093   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='cdrom'/>
	I1028 11:53:44.217102   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='hd'/>
	I1028 11:53:44.217112   95151 main.go:141] libmachine: (ha-273199-m02)     <bootmenu enable='no'/>
	I1028 11:53:44.217123   95151 main.go:141] libmachine: (ha-273199-m02)   </os>
	I1028 11:53:44.217133   95151 main.go:141] libmachine: (ha-273199-m02)   <devices>
	I1028 11:53:44.217140   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='cdrom'>
	I1028 11:53:44.217157   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/boot2docker.iso'/>
	I1028 11:53:44.217172   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:53:44.217183   95151 main.go:141] libmachine: (ha-273199-m02)       <readonly/>
	I1028 11:53:44.217196   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217208   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='disk'>
	I1028 11:53:44.217219   95151 main.go:141] libmachine: (ha-273199-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:53:44.217231   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk'/>
	I1028 11:53:44.217243   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:53:44.217254   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217268   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217279   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='mk-ha-273199'/>
	I1028 11:53:44.217289   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217297   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217306   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217311   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='default'/>
	I1028 11:53:44.217318   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217327   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217340   95151 main.go:141] libmachine: (ha-273199-m02)     <serial type='pty'>
	I1028 11:53:44.217349   95151 main.go:141] libmachine: (ha-273199-m02)       <target port='0'/>
	I1028 11:53:44.217361   95151 main.go:141] libmachine: (ha-273199-m02)     </serial>
	I1028 11:53:44.217372   95151 main.go:141] libmachine: (ha-273199-m02)     <console type='pty'>
	I1028 11:53:44.217382   95151 main.go:141] libmachine: (ha-273199-m02)       <target type='serial' port='0'/>
	I1028 11:53:44.217390   95151 main.go:141] libmachine: (ha-273199-m02)     </console>
	I1028 11:53:44.217400   95151 main.go:141] libmachine: (ha-273199-m02)     <rng model='virtio'>
	I1028 11:53:44.217420   95151 main.go:141] libmachine: (ha-273199-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:53:44.217438   95151 main.go:141] libmachine: (ha-273199-m02)     </rng>
	I1028 11:53:44.217448   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217460   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217472   95151 main.go:141] libmachine: (ha-273199-m02)   </devices>
	I1028 11:53:44.217481   95151 main.go:141] libmachine: (ha-273199-m02) </domain>
	I1028 11:53:44.217489   95151 main.go:141] libmachine: (ha-273199-m02) 
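Editor's note: the lines above are the libvirt domain XML the kvm2 driver generates for the new node before "Creating domain...". A minimal sketch of how such a definition is submitted to libvirt, assuming the Go libvirt bindings (import path and file name are illustrative; this is not minikube's actual driver code):

```go
package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the Go bindings
)

func main() {
	// Read the generated domain XML (path is illustrative).
	xml, err := os.ReadFile("ha-273199-m02.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Connect to the same URI the config dump shows (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then start it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain defined and started")
}
```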
	I1028 11:53:44.223932   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:5f:41:a3 in network default
	I1028 11:53:44.224544   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:44.224583   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring networks are active...
	I1028 11:53:44.225374   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network default is active
	I1028 11:53:44.225816   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network mk-ha-273199 is active
	I1028 11:53:44.226251   95151 main.go:141] libmachine: (ha-273199-m02) Getting domain xml...
	I1028 11:53:44.227023   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:45.439147   95151 main.go:141] libmachine: (ha-273199-m02) Waiting to get IP...
	I1028 11:53:45.440088   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.440554   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.440583   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.440482   95521 retry.go:31] will retry after 269.373557ms: waiting for machine to come up
	I1028 11:53:45.712000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.712443   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.712474   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.712389   95521 retry.go:31] will retry after 298.904949ms: waiting for machine to come up
	I1028 11:53:46.012797   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.013174   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.013203   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.013118   95521 retry.go:31] will retry after 446.110397ms: waiting for machine to come up
	I1028 11:53:46.460766   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.461220   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.461245   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.461168   95521 retry.go:31] will retry after 398.131323ms: waiting for machine to come up
	I1028 11:53:46.860852   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.861266   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.861297   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.861218   95521 retry.go:31] will retry after 575.124652ms: waiting for machine to come up
	I1028 11:53:47.437756   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:47.438185   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:47.438208   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:47.438138   95521 retry.go:31] will retry after 828.228762ms: waiting for machine to come up
	I1028 11:53:48.267451   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:48.267942   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:48.267968   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:48.267911   95521 retry.go:31] will retry after 1.143938031s: waiting for machine to come up
	I1028 11:53:49.414967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:49.415400   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:49.415424   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:49.415361   95521 retry.go:31] will retry after 1.300605887s: waiting for machine to come up
	I1028 11:53:50.717749   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:50.718139   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:50.718173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:50.718072   95521 retry.go:31] will retry after 1.594414229s: waiting for machine to come up
	I1028 11:53:52.314529   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:52.314977   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:52.315000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:52.314931   95521 retry.go:31] will retry after 1.837671448s: waiting for machine to come up
	I1028 11:53:54.154075   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:54.154455   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:54.154488   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:54.154386   95521 retry.go:31] will retry after 2.115441874s: waiting for machine to come up
	I1028 11:53:56.272674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:56.273183   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:56.273216   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:56.273084   95521 retry.go:31] will retry after 3.620483706s: waiting for machine to come up
	I1028 11:53:59.894801   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:59.895232   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:59.895260   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:59.895175   95521 retry.go:31] will retry after 3.99432381s: waiting for machine to come up
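Editor's note: the "will retry after ..." lines above show the driver polling the domain's DHCP lease with a growing, jittered delay until an address appears. A minimal sketch of that pattern; the lookup function, starting delay, and growth factor here are placeholders, not minikube's retry.go internals:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a placeholder for querying the libvirt DHCP leases for the
// domain's MAC address; it fails until the guest has obtained an address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	const mac = "52:54:00:ac:c5:96"
	backoff := 250 * time.Millisecond // illustrative starting delay

	for attempt := 1; attempt <= 15; attempt++ {
		ip, err := lookupLeaseIP(mac)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Add jitter and grow the delay, mirroring the increasing waits in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for an IP")
}
```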
	I1028 11:54:03.891608   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892071   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has current primary IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892098   95151 main.go:141] libmachine: (ha-273199-m02) Found IP for machine: 192.168.39.225
	I1028 11:54:03.892108   95151 main.go:141] libmachine: (ha-273199-m02) Reserving static IP address...
	I1028 11:54:03.892498   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find host DHCP lease matching {name: "ha-273199-m02", mac: "52:54:00:ac:c5:96", ip: "192.168.39.225"} in network mk-ha-273199
	I1028 11:54:03.966695   95151 main.go:141] libmachine: (ha-273199-m02) Reserved static IP address: 192.168.39.225
	I1028 11:54:03.966737   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Getting to WaitForSSH function...
	I1028 11:54:03.966746   95151 main.go:141] libmachine: (ha-273199-m02) Waiting for SSH to be available...
	I1028 11:54:03.969754   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970154   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:03.970188   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970315   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH client type: external
	I1028 11:54:03.970338   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa (-rw-------)
	I1028 11:54:03.970367   95151 main.go:141] libmachine: (ha-273199-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:54:03.970390   95151 main.go:141] libmachine: (ha-273199-m02) DBG | About to run SSH command:
	I1028 11:54:03.970403   95151 main.go:141] libmachine: (ha-273199-m02) DBG | exit 0
	I1028 11:54:04.099273   95151 main.go:141] libmachine: (ha-273199-m02) DBG | SSH cmd err, output: <nil>: 
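Editor's note: WaitForSSH above shells out to the system ssh binary with the option list logged a few lines earlier and treats a clean `exit 0` as "SSH is available". A minimal sketch of that probe with a subset of those options (key path and address copied from the log purely for illustration):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.225",
		"exit 0", // success means sshd is up and the key is accepted
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		log.Fatalf("ssh not ready yet: %v", err)
	}
	log.Println("SSH is available")
}
```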
	I1028 11:54:04.099507   95151 main.go:141] libmachine: (ha-273199-m02) KVM machine creation complete!
	I1028 11:54:04.099831   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:04.100498   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100706   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100853   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:54:04.100870   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetState
	I1028 11:54:04.101944   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:54:04.101958   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:54:04.101966   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:54:04.101973   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.104164   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104483   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.104506   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104767   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.104942   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105094   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105250   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.105441   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.105654   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.105665   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:54:04.218542   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.218568   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:54:04.218578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.221233   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221723   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.221745   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221945   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.222117   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222361   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222486   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.222628   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.222833   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.222844   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:54:04.335872   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:54:04.335945   95151 main.go:141] libmachine: found compatible host: buildroot
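Editor's note: "Detecting the provisioner" above is just `cat /etc/os-release` parsed for its ID field. A standalone sketch of that parsing, reusing the output captured in the log (the helper name is mine):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=value lines from /etc/os-release into a map,
// stripping surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
	fields := map[string]string{}
	for _, line := range strings.Split(contents, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[key] = strings.Trim(value, `"`)
	}
	return fields
}

func main() {
	// Output captured in the log above.
	osRelease := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

	fields := parseOSRelease(osRelease)
	if fields["ID"] == "buildroot" {
		fmt.Println("found compatible host:", fields["ID"])
	}
}
```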
	I1028 11:54:04.335957   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:54:04.335971   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336202   95151 buildroot.go:166] provisioning hostname "ha-273199-m02"
	I1028 11:54:04.336228   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336396   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.338798   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.339199   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339341   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.339521   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339681   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339813   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.339995   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.340196   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.340212   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m02 && echo "ha-273199-m02" | sudo tee /etc/hostname
	I1028 11:54:04.470703   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m02
	
	I1028 11:54:04.470739   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.473349   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473761   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.473785   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473981   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.474167   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474373   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474538   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.474717   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.474941   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.474960   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:54:04.595447   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.595480   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:54:04.595502   95151 buildroot.go:174] setting up certificates
	I1028 11:54:04.595513   95151 provision.go:84] configureAuth start
	I1028 11:54:04.595525   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.595847   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:04.598618   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599070   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.599093   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599227   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.601800   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602155   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.602179   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602325   95151 provision.go:143] copyHostCerts
	I1028 11:54:04.602362   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602399   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:54:04.602409   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602488   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:54:04.602621   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602649   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:54:04.602654   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602686   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:54:04.602741   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602762   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:54:04.602770   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602806   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:54:04.602864   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m02 san=[127.0.0.1 192.168.39.225 ha-273199-m02 localhost minikube]
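Editor's note: the line above generates a server certificate signed by the minikube CA with the SAN list shown (127.0.0.1, the node IP, the hostname, localhost, minikube). A minimal sketch of producing a certificate with that SAN set using crypto/x509; paths, the PKCS#1 CA-key assumption, and the serial-number choice are mine, and minikube's internal implementation may differ:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (paths illustrative; minikube keeps them under .minikube/certs).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA/PKCS#1 CA key
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key for the node's server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// SANs match the list in the log: loopback, the node IP, hostname aliases.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-273199-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-273199-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.225")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```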
	I1028 11:54:04.712606   95151 provision.go:177] copyRemoteCerts
	I1028 11:54:04.712663   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:54:04.712689   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.715518   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.715885   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.715912   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.716119   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.716297   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.716427   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.716589   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:04.800760   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:54:04.800829   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:54:04.821891   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:54:04.821965   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:54:04.847580   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:54:04.847678   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:54:04.870711   95151 provision.go:87] duration metric: took 275.184548ms to configureAuth
	I1028 11:54:04.870736   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:54:04.870943   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:04.871041   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.873592   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.873927   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.873960   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.874110   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.874287   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874448   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874594   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.874763   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.874973   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.874993   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:54:05.089509   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:54:05.089537   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:54:05.089548   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetURL
	I1028 11:54:05.090747   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using libvirt version 6000000
	I1028 11:54:05.092647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.092983   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.093012   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.093142   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:54:05.093158   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:54:05.093166   95151 client.go:171] duration metric: took 21.254637002s to LocalClient.Create
	I1028 11:54:05.093189   95151 start.go:167] duration metric: took 21.254710604s to libmachine.API.Create "ha-273199"
	I1028 11:54:05.093198   95151 start.go:293] postStartSetup for "ha-273199-m02" (driver="kvm2")
	I1028 11:54:05.093210   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:54:05.093234   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.093471   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:54:05.093501   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.095736   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096090   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.096118   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096277   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.096451   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.096607   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.096752   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.185260   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:54:05.189209   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:54:05.189235   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:54:05.189307   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:54:05.189410   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:54:05.189427   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:54:05.189540   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:54:05.197852   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:05.218582   95151 start.go:296] duration metric: took 125.373729ms for postStartSetup
	I1028 11:54:05.218639   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:05.219202   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.221996   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222347   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.222371   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222675   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:05.222856   95151 start.go:128] duration metric: took 21.403106118s to createHost
	I1028 11:54:05.222880   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.225160   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225457   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.225486   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225646   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.225805   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.225943   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.226048   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.226180   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:05.226400   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:05.226415   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:54:05.335802   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116445.296198293
	
	I1028 11:54:05.335827   95151 fix.go:216] guest clock: 1730116445.296198293
	I1028 11:54:05.335841   95151 fix.go:229] Guest: 2024-10-28 11:54:05.296198293 +0000 UTC Remote: 2024-10-28 11:54:05.222866703 +0000 UTC m=+67.355138355 (delta=73.33159ms)
	I1028 11:54:05.335873   95151 fix.go:200] guest clock delta is within tolerance: 73.33159ms
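Editor's note: the guest clock check above runs `date +%s.%N` inside the VM and compares the result with the host's wall clock. A small sketch of that delta/tolerance computation; the 2s threshold here is an assumption, not minikube's configured default:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// Seconds since the epoch as reported by the guest's `date +%s.%N` (value from the log).
	guest := 1730116445.296198293
	local := float64(time.Now().UnixNano()) / float64(time.Second)

	delta := time.Duration(math.Abs(local-guest) * float64(time.Second))
	tolerance := 2 * time.Second // illustrative threshold

	fmt.Printf("guest clock delta is %v, within tolerance: %v\n", delta, delta <= tolerance)
}
```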
	I1028 11:54:05.335881   95151 start.go:83] releasing machines lock for "ha-273199-m02", held for 21.516234573s
	I1028 11:54:05.335906   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.336186   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.338574   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.338916   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.338947   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.341021   95151 out.go:177] * Found network options:
	I1028 11:54:05.342553   95151 out.go:177]   - NO_PROXY=192.168.39.208
	W1028 11:54:05.343876   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.343912   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344410   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344601   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344686   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:54:05.344725   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	W1028 11:54:05.344795   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.344870   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:54:05.344892   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.347272   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347603   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.347674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347762   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.347920   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348040   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.348054   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348067   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.348192   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.348264   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.348426   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348717   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.584423   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:54:05.589736   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:54:05.589802   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:54:05.603598   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:54:05.603618   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:54:05.603689   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:54:05.618579   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:54:05.631876   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:54:05.631943   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:54:05.646115   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:54:05.659547   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:54:05.777548   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:54:05.920510   95151 docker.go:233] disabling docker service ...
	I1028 11:54:05.920601   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:54:05.935682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:54:05.948830   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:54:06.089969   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:54:06.214668   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:54:06.227025   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:54:06.243529   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:54:06.243600   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.252888   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:54:06.252945   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.262219   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.271415   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.282109   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:54:06.291692   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.300914   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.316681   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
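Editor's note: the sed calls above patch /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A small sketch of the same line-level rewrites done with Go regexps, operating on an in-memory string for illustration (the sample config contents are invented):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Mirror the sed edits in the log: swap the pause image and the cgroup driver.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop the existing conmon_cgroup line and pin it to "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
```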
	I1028 11:54:06.325900   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:54:06.334164   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:54:06.334217   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:54:06.345295   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:54:06.353414   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:06.469387   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:54:06.564464   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:54:06.564532   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:54:06.570888   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:54:06.570947   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:54:06.574424   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:54:06.609470   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:54:06.609577   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.636484   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.662978   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:54:06.664616   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:54:06.665640   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:06.668607   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.668966   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:06.669004   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.669229   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:54:06.673421   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:54:06.684696   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:54:06.684909   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:06.685156   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.685193   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.700107   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38707
	I1028 11:54:06.700577   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.701057   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.701079   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.701393   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.701590   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:54:06.703274   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:06.703621   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.703693   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.718078   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I1028 11:54:06.718513   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.718987   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.719005   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.719322   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.719504   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:06.719671   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.225
	I1028 11:54:06.719683   95151 certs.go:194] generating shared ca certs ...
	I1028 11:54:06.719702   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.719827   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:54:06.719882   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:54:06.719896   95151 certs.go:256] generating profile certs ...
	I1028 11:54:06.720023   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:54:06.720055   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909
	I1028 11:54:06.720075   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.254]
	I1028 11:54:06.852806   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 ...
	I1028 11:54:06.852843   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909: {Name:mkb8ff493606403d4b0e4c7b0477c06720a08f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853016   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 ...
	I1028 11:54:06.853029   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909: {Name:mkb3a86efc0165669c50f21e172de132f2ce3594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853101   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:54:06.853233   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:54:06.853356   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:54:06.853375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:54:06.853388   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:54:06.853400   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:54:06.853413   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:54:06.853426   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:54:06.853437   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:54:06.853448   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:54:06.853457   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:54:06.853505   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:54:06.853533   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:54:06.853542   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:54:06.853570   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:54:06.853618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:54:06.853648   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:54:06.853686   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:06.853716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:06.853730   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:54:06.853740   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:54:06.853773   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:06.856848   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857257   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:06.857283   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857465   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:06.857654   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:06.857769   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:06.857872   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:06.935983   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:54:06.940830   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:54:06.951512   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:54:06.955415   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:54:06.964440   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:54:06.967840   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:54:06.977901   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:54:06.982116   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:54:06.992655   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:54:06.997042   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:54:07.006289   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:54:07.009936   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:54:07.019550   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:54:07.043269   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:54:07.066117   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:54:07.088035   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:54:07.109468   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:54:07.130767   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:54:07.153514   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:54:07.175748   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:54:07.198209   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:54:07.219569   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:54:07.241366   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:54:07.262724   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:54:07.277348   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:54:07.291720   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:54:07.305550   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:54:07.319528   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:54:07.333567   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:54:07.347382   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:54:07.361182   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:54:07.366165   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:54:07.375271   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379042   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379097   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.384098   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:54:07.393089   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:54:07.402170   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405931   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405973   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.410926   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:54:07.420134   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:54:07.429223   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433088   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433140   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.437953   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 11:54:07.447048   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:54:07.450389   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:54:07.450445   95151 kubeadm.go:934] updating node {m02 192.168.39.225 8443 v1.31.2 crio true true} ...
	I1028 11:54:07.450537   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:54:07.450564   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:54:07.450597   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:54:07.463741   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:54:07.463803   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
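The static-pod manifest above is what gets copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down in this log. As a minimal sketch of how such a manifest can be inspected when debugging a control-plane join, the following hypothetical Go helper (not part of minikube; it assumes the sigs.k8s.io/yaml and k8s.io/api packages) loads the file and prints the advertised VIP:

// vipcheck.go: hypothetical helper, not minikube code.
// Reads a kube-vip static-pod manifest and prints the "address" env value.
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range pod.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				// For the run above this prints 192.168.39.254,
				// the APIServerHAVIP that the kubeadm join below targets.
				fmt.Printf("kube-vip VIP on %s: %s\n", pod.Name, e.Value)
			}
		}
	}
}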
	I1028 11:54:07.463849   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.472253   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:54:07.472293   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.480970   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:54:07.480983   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:54:07.481001   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.481025   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:54:07.481066   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.484605   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:54:07.484635   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:54:08.215699   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.215797   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.220472   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:54:08.220510   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:54:08.302949   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:08.332777   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.332899   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.344780   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:54:08.344827   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:54:08.738465   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:54:08.748651   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 11:54:08.763967   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:54:08.778166   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:54:08.792673   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:54:08.796110   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:54:08.806415   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:08.913077   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:08.928428   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:08.928936   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:08.929001   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:08.945393   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1028 11:54:08.945922   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:08.946367   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:08.946393   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:08.946734   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:08.946931   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:08.947168   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:54:08.947340   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:54:08.947363   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:08.950295   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.950729   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:08.950759   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.951003   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:08.951292   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:08.951467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:08.951675   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:09.101707   95151 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:09.101780   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I1028 11:54:28.747369   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (19.645557844s)
	I1028 11:54:28.747419   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:54:29.256098   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m02 minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:54:29.382642   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:54:29.487190   95151 start.go:319] duration metric: took 20.540107471s to joinCluster
	I1028 11:54:29.487270   95151 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:29.487603   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:29.489950   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:54:29.491267   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:29.728819   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:29.746970   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:54:29.747328   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:54:29.747474   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:54:29.747814   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m02" to be "Ready" ...
	I1028 11:54:29.747948   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:29.747961   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:29.747972   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:29.747980   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:29.757406   95151 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:54:30.248317   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.248345   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.248355   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.248359   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.255105   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:54:30.748943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.748969   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.748978   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.748984   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.752101   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:31.248899   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.248919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.248928   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.248936   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.251583   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.748337   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.748357   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.748366   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.748371   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.751333   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.751989   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:32.248221   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.248243   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.248251   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.248255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.259191   95151 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:54:32.748148   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.748179   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.748189   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.748194   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.751101   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.249110   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.249135   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.249144   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.249150   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.251769   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.748905   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.748928   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.748937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.748942   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.751961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:33.752497   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:34.248826   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.248847   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.248857   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.248863   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.251279   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:34.748949   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.748976   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.748988   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.748993   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.752114   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:35.248874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.248898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.248906   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.248911   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.251839   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:35.748886   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.748919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.748932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.748940   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.751814   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.248781   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.248808   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.248821   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.248826   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.251662   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.252253   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:36.748294   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.748319   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.748329   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.748343   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.751795   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.248778   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.248807   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.248815   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.248820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.252064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.748876   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.748901   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.748910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.748922   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.752889   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.248910   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.248935   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.248946   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.248951   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.252324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.252974   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:38.748358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.748389   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.748401   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.748410   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.751564   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.248494   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.248515   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.248524   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.248530   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.251902   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.748889   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.748912   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.748920   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.748925   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.751666   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.248637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.248663   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.248675   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.248682   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.251500   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.748631   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.748655   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.748665   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.748671   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.751537   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.752161   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:41.248409   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.248429   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.248437   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.248441   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.251178   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:41.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.748632   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.748641   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.748645   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.751235   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.248135   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.248157   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.248166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.248171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.251061   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.748875   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.748898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.748904   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.748908   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.751883   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.752428   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:43.248728   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.248749   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.248757   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.248760   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.251847   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:43.748532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.748554   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.748562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.748565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.751916   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.248210   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.248233   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.248241   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.248245   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.251111   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:44.749062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.749085   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.749092   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.749096   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.752695   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.753451   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:45.248752   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.248776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.248784   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.248787   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.251702   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:45.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.748635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.748643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.748647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.751481   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:46.248237   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.248261   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.248269   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.248272   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.251677   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:46.748175   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.748196   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.748204   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.748209   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.750924   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.249094   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.249121   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.249133   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.249139   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.251939   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.252527   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:47.748867   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.748890   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.748899   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.748903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.751778   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.248555   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.248585   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.248593   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.248597   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.251510   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.252376   95151 node_ready.go:49] node "ha-273199-m02" has status "Ready":"True"
	I1028 11:54:48.252397   95151 node_ready.go:38] duration metric: took 18.504559305s for node "ha-273199-m02" to be "Ready" ...
	I1028 11:54:48.252406   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:54:48.252478   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:48.252487   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.252496   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.252506   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.256049   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:48.261653   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.261730   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:54:48.261739   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.261746   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.261749   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.264166   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.264759   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.264776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.264785   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.264790   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.266666   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.267238   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.267257   95151 pod_ready.go:82] duration metric: took 5.581341ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267267   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267326   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:54:48.267336   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.267346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.267353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.269749   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.270236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.270252   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.270259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.270262   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.272089   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.272472   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.272487   95151 pod_ready.go:82] duration metric: took 5.21491ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272495   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272536   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:54:48.272543   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.272550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.272553   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.274596   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.275004   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.275018   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.275024   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.275028   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.277124   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.277710   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.277730   95151 pod_ready.go:82] duration metric: took 5.229334ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277742   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277804   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:54:48.277816   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.277826   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.277830   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282085   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:48.282776   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.282794   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.282804   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282810   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.284715   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.285139   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.285156   95151 pod_ready.go:82] duration metric: took 7.407951ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.285172   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.449552   95151 request.go:632] Waited for 164.30368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449649   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.449658   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.449662   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.452644   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.649614   95151 request.go:632] Waited for 196.347979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649674   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649678   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.649686   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.649691   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.652639   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.653086   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.653104   95151 pod_ready.go:82] duration metric: took 367.924183ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.653115   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.849567   95151 request.go:632] Waited for 196.382043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849633   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849638   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.849645   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.849650   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.853050   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.049149   95151 request.go:632] Waited for 195.394568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049239   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049247   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.049258   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.049265   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.052619   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.053476   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.053498   95151 pod_ready.go:82] duration metric: took 400.377088ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.053510   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.249514   95151 request.go:632] Waited for 195.91409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249580   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.249588   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.249592   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.252347   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.449321   95151 request.go:632] Waited for 196.389294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449390   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449397   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.449406   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.449409   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.451910   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.452527   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.452552   95151 pod_ready.go:82] duration metric: took 399.03422ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.452565   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.649568   95151 request.go:632] Waited for 196.917152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.649643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.649647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.652785   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.848836   95151 request.go:632] Waited for 195.315288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848913   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848921   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.848932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.848937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.851674   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.852191   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.852210   95151 pod_ready.go:82] duration metric: took 399.639073ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.852221   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.049350   95151 request.go:632] Waited for 197.035616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049425   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049433   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.049443   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.049452   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.052771   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.248743   95151 request.go:632] Waited for 195.280445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248807   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248812   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.248827   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.248832   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.251804   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:50.252387   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.252412   95151 pod_ready.go:82] duration metric: took 400.185555ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.252424   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.449549   95151 request.go:632] Waited for 197.016421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449623   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449628   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.449639   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.449643   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.453027   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.649191   95151 request.go:632] Waited for 195.415709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649276   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649281   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.649289   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.649293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.652536   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.653266   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.653285   95151 pod_ready.go:82] duration metric: took 400.855966ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.653296   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.849376   95151 request.go:632] Waited for 196.004526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849458   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849463   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.849471   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.849475   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.852508   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.049649   95151 request.go:632] Waited for 196.358583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049709   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049715   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.049722   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.049726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.053157   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.053815   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.053835   95151 pod_ready.go:82] duration metric: took 400.533283ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.053846   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.248991   95151 request.go:632] Waited for 195.052058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249059   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249064   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.249072   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.249078   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.252735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.448724   95151 request.go:632] Waited for 195.285595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448790   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448806   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.448820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.448825   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.452721   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.453238   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.453263   95151 pod_ready.go:82] duration metric: took 399.409754ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.453278   95151 pod_ready.go:39] duration metric: took 3.200858022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
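	The lines above are minikube polling each control-plane pod for its Ready condition with raw GETs against the apiserver. For reference, a minimal client-go sketch of the same check (illustrative only, assuming a local kubeconfig; the pod name, interval, and timeout are placeholders, not minikube's pod_ready.go):

	// podready_sketch.go - illustrative only; not minikube's own code.
	// Assumes k8s.io/client-go and a reachable kubeconfig.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll twice a second, up to the 6m0s budget the log mentions.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-273199", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				return isPodReady(pod), nil
			})
		fmt.Println("ready:", err == nil)
	}

	The Ready condition flipping to True is exactly what the pod_ready.go:93 lines above report for each pod.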
	I1028 11:54:51.453306   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:54:51.453378   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:54:51.468618   95151 api_server.go:72] duration metric: took 21.98130215s to wait for apiserver process to appear ...
	I1028 11:54:51.468648   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:54:51.468673   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:54:51.472937   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:54:51.473008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:54:51.473014   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.473022   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.473030   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.473790   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:54:51.473893   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:54:51.473910   95151 api_server.go:131] duration metric: took 5.255617ms to wait for apiserver health ...
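	After the pods are Ready, the log probes /healthz and /version directly over HTTPS. A minimal sketch of that probe in Go; TLS verification is skipped here only for brevity, whereas a real client would trust the cluster CA:

	// healthz_sketch.go - illustrative probe of the apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only
			},
		}
		resp, err := client.Get("https://192.168.39.208:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok",
		// matching the "returned 200: ok" lines in the log above.
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	}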
	I1028 11:54:51.473917   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:54:51.649350   95151 request.go:632] Waited for 175.3296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649418   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649424   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.649431   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.649436   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.653819   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:51.658610   95151 system_pods.go:59] 17 kube-system pods found
	I1028 11:54:51.658641   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:51.658646   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:51.658651   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:51.658654   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:51.658657   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:51.658660   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:51.658664   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:51.658669   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:51.658674   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:51.658682   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:51.658691   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:51.658696   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:51.658700   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:51.658704   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:51.658707   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:51.658710   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:51.658715   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:51.658722   95151 system_pods.go:74] duration metric: took 184.79709ms to wait for pod list to return data ...
	I1028 11:54:51.658732   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:54:51.849471   95151 request.go:632] Waited for 190.648261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849537   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.849546   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.849549   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.853472   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.853716   95151 default_sa.go:45] found service account: "default"
	I1028 11:54:51.853732   95151 default_sa.go:55] duration metric: took 194.991571ms for default service account to be created ...
	I1028 11:54:51.853742   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:54:52.049206   95151 request.go:632] Waited for 195.38768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049272   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049279   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.049287   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.049293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.055256   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:54:52.060109   95151 system_pods.go:86] 17 kube-system pods found
	I1028 11:54:52.060133   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:52.060139   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:52.060143   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:52.060147   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:52.060151   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:52.060154   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:52.060158   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:52.060162   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:52.060166   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:52.060171   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:52.060175   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:52.060178   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:52.060182   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:52.060185   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:52.060188   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:52.060192   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:52.060196   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:52.060203   95151 system_pods.go:126] duration metric: took 206.45399ms to wait for k8s-apps to be running ...
	I1028 11:54:52.060213   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:54:52.060255   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:52.076447   95151 system_svc.go:56] duration metric: took 16.226067ms WaitForService to wait for kubelet
	I1028 11:54:52.076476   95151 kubeadm.go:582] duration metric: took 22.589167548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
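	The kubelet check above is a single systemctl invocation run through minikube's SSH runner on the guest. A local equivalent, sketched with os/exec (illustrative, not minikube's system_svc.go):

	// kubelet_check_sketch.go - treat a zero exit from systemctl as "running".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			// A non-zero exit (inactive/failed unit) surfaces as *exec.ExitError.
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}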
	I1028 11:54:52.076506   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:54:52.248935   95151 request.go:632] Waited for 172.334931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.248998   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.249004   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.249011   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.249015   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.252625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:52.253475   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253500   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253515   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253518   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253523   95151 node_conditions.go:105] duration metric: took 177.008634ms to run NodePressure ...
	I1028 11:54:52.253537   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:54:52.253563   95151 start.go:255] writing updated cluster config ...
	I1028 11:54:52.255885   95151 out.go:201] 
	I1028 11:54:52.257299   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:52.257397   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.258847   95151 out.go:177] * Starting "ha-273199-m03" control-plane node in "ha-273199" cluster
	I1028 11:54:52.259962   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:54:52.259986   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:54:52.260095   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:54:52.260118   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:54:52.260241   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.260461   95151 start.go:360] acquireMachinesLock for ha-273199-m03: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:54:52.260509   95151 start.go:364] duration metric: took 28.17µs to acquireMachinesLock for "ha-273199-m03"
	I1028 11:54:52.260527   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:52.260626   95151 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:54:52.262400   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:54:52.262503   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:52.262543   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:52.277859   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I1028 11:54:52.278262   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:52.278738   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:52.278759   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:52.279160   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:52.279351   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:54:52.279503   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:54:52.279669   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:54:52.279701   95151 client.go:168] LocalClient.Create starting
	I1028 11:54:52.279735   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:54:52.279771   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279787   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279863   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:54:52.279888   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279905   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279929   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:54:52.279940   95151 main.go:141] libmachine: (ha-273199-m03) Calling .PreCreateCheck
	I1028 11:54:52.280085   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:54:52.280426   95151 main.go:141] libmachine: Creating machine...
	I1028 11:54:52.280439   95151 main.go:141] libmachine: (ha-273199-m03) Calling .Create
	I1028 11:54:52.280557   95151 main.go:141] libmachine: (ha-273199-m03) Creating KVM machine...
	I1028 11:54:52.281865   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing default KVM network
	I1028 11:54:52.281971   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing private KVM network mk-ha-273199
	I1028 11:54:52.282111   95151 main.go:141] libmachine: (ha-273199-m03) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.282133   95151 main.go:141] libmachine: (ha-273199-m03) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:54:52.282187   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.282077   95896 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.282257   95151 main.go:141] libmachine: (ha-273199-m03) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:54:52.559668   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.559518   95896 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa...
	I1028 11:54:52.735541   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.735336   95896 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk...
	I1028 11:54:52.735589   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing magic tar header
	I1028 11:54:52.735964   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing SSH key tar header
	I1028 11:54:52.736074   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.736016   95896 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.736145   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03
	I1028 11:54:52.736240   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 (perms=drwx------)
	I1028 11:54:52.736277   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:54:52.736290   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:54:52.736342   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:54:52.736362   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:54:52.736375   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.736394   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:54:52.736406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:54:52.736415   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:54:52.736428   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:54:52.736436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home
	I1028 11:54:52.736447   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:54:52.736462   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:52.736473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Skipping /home - not owner
	I1028 11:54:52.737378   95151 main.go:141] libmachine: (ha-273199-m03) define libvirt domain using xml: 
	I1028 11:54:52.737401   95151 main.go:141] libmachine: (ha-273199-m03) <domain type='kvm'>
	I1028 11:54:52.737412   95151 main.go:141] libmachine: (ha-273199-m03)   <name>ha-273199-m03</name>
	I1028 11:54:52.737420   95151 main.go:141] libmachine: (ha-273199-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:54:52.737428   95151 main.go:141] libmachine: (ha-273199-m03)   <vcpu>2</vcpu>
	I1028 11:54:52.737434   95151 main.go:141] libmachine: (ha-273199-m03)   <features>
	I1028 11:54:52.737442   95151 main.go:141] libmachine: (ha-273199-m03)     <acpi/>
	I1028 11:54:52.737451   95151 main.go:141] libmachine: (ha-273199-m03)     <apic/>
	I1028 11:54:52.737465   95151 main.go:141] libmachine: (ha-273199-m03)     <pae/>
	I1028 11:54:52.737475   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737485   95151 main.go:141] libmachine: (ha-273199-m03)   </features>
	I1028 11:54:52.737498   95151 main.go:141] libmachine: (ha-273199-m03)   <cpu mode='host-passthrough'>
	I1028 11:54:52.737507   95151 main.go:141] libmachine: (ha-273199-m03)   
	I1028 11:54:52.737512   95151 main.go:141] libmachine: (ha-273199-m03)   </cpu>
	I1028 11:54:52.737516   95151 main.go:141] libmachine: (ha-273199-m03)   <os>
	I1028 11:54:52.737521   95151 main.go:141] libmachine: (ha-273199-m03)     <type>hvm</type>
	I1028 11:54:52.737530   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='cdrom'/>
	I1028 11:54:52.737537   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='hd'/>
	I1028 11:54:52.737549   95151 main.go:141] libmachine: (ha-273199-m03)     <bootmenu enable='no'/>
	I1028 11:54:52.737555   95151 main.go:141] libmachine: (ha-273199-m03)   </os>
	I1028 11:54:52.737566   95151 main.go:141] libmachine: (ha-273199-m03)   <devices>
	I1028 11:54:52.737573   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='cdrom'>
	I1028 11:54:52.737605   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/boot2docker.iso'/>
	I1028 11:54:52.737626   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:54:52.737633   95151 main.go:141] libmachine: (ha-273199-m03)       <readonly/>
	I1028 11:54:52.737643   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737649   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='disk'>
	I1028 11:54:52.737657   95151 main.go:141] libmachine: (ha-273199-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:54:52.737664   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk'/>
	I1028 11:54:52.737674   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:54:52.737679   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737686   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737691   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='mk-ha-273199'/>
	I1028 11:54:52.737697   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737702   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737709   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737714   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='default'/>
	I1028 11:54:52.737721   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737725   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737736   95151 main.go:141] libmachine: (ha-273199-m03)     <serial type='pty'>
	I1028 11:54:52.737741   95151 main.go:141] libmachine: (ha-273199-m03)       <target port='0'/>
	I1028 11:54:52.737750   95151 main.go:141] libmachine: (ha-273199-m03)     </serial>
	I1028 11:54:52.737755   95151 main.go:141] libmachine: (ha-273199-m03)     <console type='pty'>
	I1028 11:54:52.737764   95151 main.go:141] libmachine: (ha-273199-m03)       <target type='serial' port='0'/>
	I1028 11:54:52.737796   95151 main.go:141] libmachine: (ha-273199-m03)     </console>
	I1028 11:54:52.737822   95151 main.go:141] libmachine: (ha-273199-m03)     <rng model='virtio'>
	I1028 11:54:52.737835   95151 main.go:141] libmachine: (ha-273199-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:54:52.737849   95151 main.go:141] libmachine: (ha-273199-m03)     </rng>
	I1028 11:54:52.737862   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737871   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737883   95151 main.go:141] libmachine: (ha-273199-m03)   </devices>
	I1028 11:54:52.737895   95151 main.go:141] libmachine: (ha-273199-m03) </domain>
	I1028 11:54:52.737906   95151 main.go:141] libmachine: (ha-273199-m03) 
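	The XML above is the domain definition that libmachine hands to libvirt. A minimal sketch of defining and starting such a domain with the libvirt Go bindings, assuming libvirt.org/go/libvirt is the binding in use (minikube's kvm2 driver wraps this behind its own plugin API):

	// define_domain_sketch.go - illustrative: define and boot a domain from XML
	// like the one logged above. File path and URI are placeholders.
	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("ha-273199-m03.xml") // the <domain> definition
		if err != nil {
			panic(err)
		}

		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
			panic(err)
		}
		fmt.Println("domain defined and started")
	}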
	I1028 11:54:52.744674   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:8b:32:6e in network default
	I1028 11:54:52.745255   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring networks are active...
	I1028 11:54:52.745282   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:52.745947   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network default is active
	I1028 11:54:52.746212   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network mk-ha-273199 is active
	I1028 11:54:52.746662   95151 main.go:141] libmachine: (ha-273199-m03) Getting domain xml...
	I1028 11:54:52.747399   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:53.955503   95151 main.go:141] libmachine: (ha-273199-m03) Waiting to get IP...
	I1028 11:54:53.956506   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:53.956900   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:53.956929   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:53.956873   95896 retry.go:31] will retry after 206.527377ms: waiting for machine to come up
	I1028 11:54:54.165229   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.165718   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.165747   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.165667   95896 retry.go:31] will retry after 298.714532ms: waiting for machine to come up
	I1028 11:54:54.466211   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.466648   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.466677   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.466592   95896 retry.go:31] will retry after 313.294403ms: waiting for machine to come up
	I1028 11:54:54.781194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.781751   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.781781   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.781697   95896 retry.go:31] will retry after 490.276773ms: waiting for machine to come up
	I1028 11:54:55.273485   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:55.273980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:55.274010   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:55.273908   95896 retry.go:31] will retry after 747.967363ms: waiting for machine to come up
	I1028 11:54:56.023947   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.024406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.024436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.024354   95896 retry.go:31] will retry after 879.955575ms: waiting for machine to come up
	I1028 11:54:56.905338   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.905786   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.905854   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.905727   95896 retry.go:31] will retry after 900.403526ms: waiting for machine to come up
	I1028 11:54:57.807987   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:57.808508   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:57.808532   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:57.808456   95896 retry.go:31] will retry after 915.528727ms: waiting for machine to come up
	I1028 11:54:58.725704   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:58.726141   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:58.726171   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:58.726079   95896 retry.go:31] will retry after 1.589094397s: waiting for machine to come up
	I1028 11:55:00.316739   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:00.317159   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:00.317192   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:00.317103   95896 retry.go:31] will retry after 2.113867198s: waiting for machine to come up
	I1028 11:55:02.432898   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:02.433399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:02.433425   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:02.433344   95896 retry.go:31] will retry after 2.28050393s: waiting for machine to come up
	I1028 11:55:04.716742   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:04.717181   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:04.717204   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:04.717143   95896 retry.go:31] will retry after 2.249398536s: waiting for machine to come up
	I1028 11:55:06.969577   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:06.970058   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:06.970080   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:06.970033   95896 retry.go:31] will retry after 2.958136846s: waiting for machine to come up
	I1028 11:55:09.929637   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:09.930041   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:09.930070   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:09.929982   95896 retry.go:31] will retry after 4.070894756s: waiting for machine to come up
	I1028 11:55:14.002837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003301   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has current primary IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003323   95151 main.go:141] libmachine: (ha-273199-m03) Found IP for machine: 192.168.39.14
	I1028 11:55:14.003336   95151 main.go:141] libmachine: (ha-273199-m03) Reserving static IP address...
	I1028 11:55:14.003697   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "ha-273199-m03", mac: "52:54:00:46:1d:e9", ip: "192.168.39.14"} in network mk-ha-273199
	I1028 11:55:14.078161   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:14.078198   95151 main.go:141] libmachine: (ha-273199-m03) Reserved static IP address: 192.168.39.14
	I1028 11:55:14.078221   95151 main.go:141] libmachine: (ha-273199-m03) Waiting for SSH to be available...
	I1028 11:55:14.080426   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.080837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199
	I1028 11:55:14.080864   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find defined IP address of network mk-ha-273199 interface with MAC address 52:54:00:46:1d:e9
	I1028 11:55:14.080998   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:14.081020   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:14.081088   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:14.081126   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:14.081172   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:14.084960   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 11:55:14.084981   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 11:55:14.084988   95151 main.go:141] libmachine: (ha-273199-m03) DBG | command : exit 0
	I1028 11:55:14.084993   95151 main.go:141] libmachine: (ha-273199-m03) DBG | err     : exit status 255
	I1028 11:55:14.084999   95151 main.go:141] libmachine: (ha-273199-m03) DBG | output  : 
	I1028 11:55:17.085220   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:17.087584   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.087980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.088014   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.088124   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:17.088151   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:17.088186   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:17.088203   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:17.088242   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:17.219250   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: <nil>: 
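
The WaitForSSH step shells out to the system ssh client and simply runs `exit 0` against the guest: exit status 255 (as in the first attempt above) means sshd is not reachable yet, while a clean exit means the machine is ready. A rough Go equivalent using os/exec, with the host and key path taken from the log and the option list trimmed for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH mirrors the external-client WaitForSSH probe above: run `exit 0`
// over ssh with host-key checking disabled and report whether it succeeded.
func probeSSH(host, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() // non-nil (e.g. exit status 255) until sshd is up
}

func main() {
	if err := probeSSH("192.168.39.14", "/path/to/id_rsa"); err != nil {
		fmt.Println("not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
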
	I1028 11:55:17.219518   95151 main.go:141] libmachine: (ha-273199-m03) KVM machine creation complete!
	I1028 11:55:17.219876   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:17.220483   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220685   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220845   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:55:17.220861   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetState
	I1028 11:55:17.222309   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:55:17.222328   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:55:17.222335   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:55:17.222343   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.224588   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.224925   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.224952   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.225089   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.225238   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225410   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225535   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.225685   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.225933   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.225948   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:55:17.334782   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:55:17.334812   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:55:17.334821   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.337833   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338269   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.338297   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338479   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.338845   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339007   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339176   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.339341   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.339539   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.339557   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:55:17.451978   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:55:17.452046   95151 main.go:141] libmachine: found compatible host: buildroot
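
Provisioner detection is just `cat /etc/os-release` plus a match on the ID/NAME fields, which here report Buildroot. A small Go version of that parse, reading the same file on whatever machine it is run on:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Read /etc/os-release into a key/value map and key off the ID field,
// the same signal the "found compatible host: buildroot" line is based on.
func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			info[k] = strings.Trim(v, `"`)
		}
	}
	if strings.EqualFold(info["ID"], "buildroot") {
		fmt.Println("found compatible host:", info["PRETTY_NAME"])
	} else {
		fmt.Println("other provisioner:", info["NAME"])
	}
}
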
	I1028 11:55:17.452059   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:55:17.452070   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452277   95151 buildroot.go:166] provisioning hostname "ha-273199-m03"
	I1028 11:55:17.452288   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452476   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.455103   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455535   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.455562   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455708   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.455867   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.455984   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.456067   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.456198   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.456408   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.456424   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m03 && echo "ha-273199-m03" | sudo tee /etc/hostname
	I1028 11:55:17.580666   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m03
	
	I1028 11:55:17.580700   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.583194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583511   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.583528   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583802   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.584016   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584194   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584336   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.584491   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.584694   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.584718   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:55:17.704448   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
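
The shell snippet above makes the new hostname resolve locally: if no /etc/hosts line ends in ha-273199-m03, the existing 127.0.1.1 entry is rewritten, otherwise one is appended. The same logic expressed in Go, with the file path parameterized so the sketch can be pointed at a scratch copy rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry reproduces the grep/sed/tee snippet from the log: add or
// rewrite the 127.0.1.1 mapping unless the hostname is already present.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
		return nil // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts-copy", "ha-273199-m03"); err != nil {
		fmt.Println(err)
	}
}
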
	I1028 11:55:17.704483   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:55:17.704502   95151 buildroot.go:174] setting up certificates
	I1028 11:55:17.704515   95151 provision.go:84] configureAuth start
	I1028 11:55:17.704525   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.704814   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:17.707324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707661   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.707690   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707847   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.710530   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710812   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.710834   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710987   95151 provision.go:143] copyHostCerts
	I1028 11:55:17.711016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711055   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:55:17.711067   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711144   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:55:17.711240   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711266   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:55:17.711274   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711309   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:55:17.711375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711397   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:55:17.711406   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711441   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:55:17.711512   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m03 san=[127.0.0.1 192.168.39.14 ha-273199-m03 localhost minikube]
	I1028 11:55:17.872732   95151 provision.go:177] copyRemoteCerts
	I1028 11:55:17.872791   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:55:17.872822   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.875766   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876231   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.876275   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876474   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.876674   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.876862   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.877007   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:17.961016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:55:17.961081   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:55:17.984138   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:55:17.984226   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:55:18.008131   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:55:18.008227   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:55:18.031369   95151 provision.go:87] duration metric: took 326.838997ms to configureAuth
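
configureAuth regenerates a server certificate whose SANs cover the node's IP and hostnames (the san=[…] list a few lines up) and then copies it to /etc/docker on the guest. A compact crypto/x509 sketch of producing such a SAN-bearing certificate; it self-signs for brevity, whereas the real step signs with the minikube CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Fresh key; the real flow would sign with the CA key instead of self-signing.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-273199-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log's san=[...] list.
		DNSNames:    []string{"ha-273199-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.14")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
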
	I1028 11:55:18.031405   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:55:18.031687   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:18.031768   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.034245   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034499   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.034512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034834   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.035030   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035212   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035366   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.035511   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.035733   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.035755   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:55:18.272929   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:55:18.272957   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:55:18.272965   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetURL
	I1028 11:55:18.274324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using libvirt version 6000000
	I1028 11:55:18.276917   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277260   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.277286   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277469   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:55:18.277495   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:55:18.277503   95151 client.go:171] duration metric: took 25.997791015s to LocalClient.Create
	I1028 11:55:18.277533   95151 start.go:167] duration metric: took 25.997864783s to libmachine.API.Create "ha-273199"
	I1028 11:55:18.277545   95151 start.go:293] postStartSetup for "ha-273199-m03" (driver="kvm2")
	I1028 11:55:18.277554   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:55:18.277570   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.277772   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:55:18.277797   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.280107   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.280500   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280672   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.280818   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.280972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.281096   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.364949   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:55:18.368679   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:55:18.368702   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:55:18.368765   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:55:18.368831   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:55:18.368841   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:55:18.368936   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:55:18.377576   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:18.398595   95151 start.go:296] duration metric: took 121.036125ms for postStartSetup
	I1028 11:55:18.398663   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:18.399226   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.401512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.401817   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.401845   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.402086   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:55:18.402271   95151 start.go:128] duration metric: took 26.1416351s to createHost
	I1028 11:55:18.402293   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.404399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404785   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.404814   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.405120   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405233   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405349   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.405479   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.405697   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.405707   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:55:18.516101   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116518.496273878
	
	I1028 11:55:18.516127   95151 fix.go:216] guest clock: 1730116518.496273878
	I1028 11:55:18.516135   95151 fix.go:229] Guest: 2024-10-28 11:55:18.496273878 +0000 UTC Remote: 2024-10-28 11:55:18.402282303 +0000 UTC m=+140.534554028 (delta=93.991575ms)
	I1028 11:55:18.516153   95151 fix.go:200] guest clock delta is within tolerance: 93.991575ms
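
The guest-clock check runs `date +%s.%N` over SSH, parses the result, and compares it with the host's wall clock; only a delta beyond some tolerance would trigger a resync. A minimal parse-and-compare using the two timestamps from the log; the one-second tolerance below is an assumption for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	const guestOut = "1730116518.496273878"                        // from `date +%s.%N`
	remote := time.Date(2024, 10, 28, 11, 55, 18, 402282303, time.UTC) // host-side timestamp

	secStr, fracStr, _ := strings.Cut(guestOut, ".")
	sec, _ := strconv.ParseInt(secStr, 10, 64)
	for len(fracStr) < 9 {
		fracStr += "0"
	}
	nsec, _ := strconv.ParseInt(fracStr[:9], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	delta := guest.Sub(remote)
	if delta < time.Second && delta > -time.Second {
		fmt.Println("guest clock delta is within tolerance:", delta)
	} else {
		fmt.Println("guest clock needs syncing:", delta)
	}
}
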
	I1028 11:55:18.516160   95151 start.go:83] releasing machines lock for "ha-273199-m03", held for 26.255640766s
	I1028 11:55:18.516185   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.516440   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.519412   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.519815   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.519848   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.524337   95151 out.go:177] * Found network options:
	I1028 11:55:18.525743   95151 out.go:177]   - NO_PROXY=192.168.39.208,192.168.39.225
	W1028 11:55:18.527126   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.527158   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.527179   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527726   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527918   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.528047   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:55:18.528091   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	W1028 11:55:18.528116   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.528141   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.528213   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:55:18.528236   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.531068   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531433   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531460   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531507   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531598   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.531771   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.531976   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531993   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532001   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.532119   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.532160   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.532259   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.532384   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532522   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.778405   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:55:18.783655   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:55:18.783756   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:55:18.797677   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:55:18.797700   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:55:18.797761   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:55:18.814061   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:55:18.825773   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:55:18.825825   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:55:18.837935   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:55:18.849554   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:55:18.965481   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:55:19.099249   95151 docker.go:233] disabling docker service ...
	I1028 11:55:19.099323   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:55:19.113114   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:55:19.124849   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:55:19.250769   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:55:19.359879   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:55:19.373349   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:55:19.389521   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:55:19.389615   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.398854   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:55:19.398906   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.407802   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.417192   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.427164   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:55:19.436640   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.445835   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.462270   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
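
The chain of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The first two of those edits expressed as Go regexp replacements; point the path at a copy of the file when experimenting:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// Equivalent of the sed expressions in the log above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}
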
	I1028 11:55:19.471609   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:55:19.480345   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:55:19.480383   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:55:19.492803   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:55:19.501227   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:19.617782   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:55:19.703544   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:55:19.703660   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:55:19.708269   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:55:19.708326   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:55:19.712086   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:55:19.749930   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:55:19.750010   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.775811   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.801952   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:55:19.803114   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:55:19.804273   95151 out.go:177]   - env NO_PROXY=192.168.39.208,192.168.39.225
	I1028 11:55:19.805417   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:19.808218   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808625   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:19.808655   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808919   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:55:19.812627   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:55:19.824073   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:55:19.824319   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:19.824582   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.824620   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.838910   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I1028 11:55:19.839306   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.839763   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.839782   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.840142   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.840307   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:55:19.841569   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:19.841856   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.841897   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.855881   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I1028 11:55:19.856375   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.856826   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.856843   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.857163   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.857327   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:19.857467   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.14
	I1028 11:55:19.857480   95151 certs.go:194] generating shared ca certs ...
	I1028 11:55:19.857496   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.857646   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:55:19.857702   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:55:19.857720   95151 certs.go:256] generating profile certs ...
	I1028 11:55:19.857827   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:55:19.857863   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7
	I1028 11:55:19.857891   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.14 192.168.39.254]
	I1028 11:55:19.946624   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 ...
	I1028 11:55:19.946653   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7: {Name:mk3236f0712e0310e6a0f8a3941b2eeadd0570c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946816   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 ...
	I1028 11:55:19.946829   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7: {Name:mka0c613afe4278aca8a4ff26ddba521c4e341b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946908   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:55:19.947042   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:55:19.947166   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:55:19.947182   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:55:19.947196   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:55:19.947208   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:55:19.947221   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:55:19.947233   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:55:19.947245   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:55:19.947256   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:55:19.967716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:55:19.967802   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:55:19.967847   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:55:19.967864   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:55:19.967899   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:55:19.967933   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:55:19.967965   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:55:19.968019   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:19.968051   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:55:19.968066   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:55:19.968076   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:19.968113   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:19.971063   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971502   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:19.971527   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971715   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:19.971902   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:19.972073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:19.972212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:20.047980   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:55:20.052462   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:55:20.063257   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:55:20.067603   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:55:20.083360   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:55:20.087209   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:55:20.096958   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:55:20.100595   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:55:20.113829   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:55:20.117648   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:55:20.126859   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:55:20.130471   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:55:20.139759   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:55:20.167843   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:55:20.191233   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:55:20.214438   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:55:20.235571   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:55:20.261436   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:55:20.285034   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:55:20.310624   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:55:20.332555   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:55:20.354176   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:55:20.374974   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:55:20.396001   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:55:20.411032   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:55:20.426186   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:55:20.441112   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:55:20.456730   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:55:20.472441   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:55:20.488012   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:55:20.502635   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:55:20.508164   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:55:20.519601   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523711   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523777   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.529016   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 11:55:20.538537   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:55:20.548100   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552319   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552375   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.557900   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:55:20.567792   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:55:20.577338   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581264   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581323   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.586529   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:55:20.596428   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:55:20.600115   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
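
The "likely first start" message is driven by nothing more than a stat of the kubelet client certificate: if the file is missing, the node is treated as fresh and the cert material is pushed over. The same check in Go:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const cert = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if _, err := os.Stat(cert); errors.Is(err, fs.ErrNotExist) {
		fmt.Println("cert doesn't exist, likely first start")
	} else if err != nil {
		fmt.Println("could not stat cert:", err)
	} else {
		fmt.Println("cert already present, reusing it")
	}
}
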
	I1028 11:55:20.600167   95151 kubeadm.go:934] updating node {m03 192.168.39.14 8443 v1.31.2 crio true true} ...
	I1028 11:55:20.600258   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:55:20.600291   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:55:20.600325   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:55:20.616989   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:55:20.617099   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
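
The manifest above is written a few lines later to /etc/kubernetes/manifests/kube-vip.yaml as a static pod, and kube-vip then advertises 192.168.39.254:8443 as the control-plane VIP. A minimal reachability sketch against that VIP, using only the Go standard library (the probe is an illustration, not part of the test; certificate verification is skipped because it only checks that something answers):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip manifest above.
	addr := net.JoinHostPort("192.168.39.254", "8443")
	// Skip verification: this probe only checks that something is answering
	// TLS on the advertised VIP, not the identity of the API server.
	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 5 * time.Second}, "tcp", addr,
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("control-plane VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("control-plane VIP is answering TLS on", addr)
}
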
	I1028 11:55:20.617151   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.626357   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:55:20.626409   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.634842   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:55:20.634876   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634922   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:55:20.634942   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634948   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.634853   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:55:20.635007   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.635050   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:55:20.638692   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:55:20.638722   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:55:20.663836   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.663872   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:55:20.663905   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:55:20.663970   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.699827   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:55:20.699877   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
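
The binary.go:74 lines above fetch kubectl, kubeadm and kubelet from dl.k8s.io and validate them against the published .sha256 files (the checksum=file: suffix). A minimal sketch of that download-then-verify pattern, assuming the .sha256 file carries the hex digest in its first field (a generic illustration, not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dst and returns the SHA-256 of what was written.
func fetch(url, dst string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The companion .sha256 file holds the expected hex digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.Fields(string(want))[0] {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubelet verified:", got)
}
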
	I1028 11:55:21.384145   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:55:21.393997   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:55:21.409884   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:55:21.425811   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:55:21.441992   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:55:21.445803   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:55:21.457453   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:21.579499   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:21.596582   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:21.597031   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:21.597081   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:21.612568   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I1028 11:55:21.613014   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:21.613608   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:21.613636   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:21.613983   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:21.614133   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:21.614251   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:55:21.614418   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:55:21.614445   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:21.617174   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617565   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:21.617589   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617762   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:21.617923   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:21.618054   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:21.618200   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:21.766904   95151 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:21.766967   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443"
	I1028 11:55:42.707746   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443": (20.940747813s)
	I1028 11:55:42.707786   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:55:43.259520   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m03 minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:55:43.364349   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:55:43.486876   95151 start.go:319] duration metric: took 21.872622243s to joinCluster
	I1028 11:55:43.486974   95151 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:43.487346   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:43.488385   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:55:43.489624   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:43.714323   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:43.797310   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:55:43.797585   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:55:43.797659   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:55:43.797894   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m03" to be "Ready" ...
	I1028 11:55:43.797978   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:43.797989   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:43.797999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:43.798002   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:43.801478   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.298184   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.298206   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.298216   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.298222   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.301984   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.798900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.798925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.798933   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.798937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.802625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.298286   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.298308   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.298316   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.298323   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.301749   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.798575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.798599   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.798606   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.798609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.801730   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.802260   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:46.298797   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.298831   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.298843   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.298848   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.301856   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:46.798975   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.798994   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.799003   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.799009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.802334   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.298943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.298969   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.298981   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.298987   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.302012   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.799134   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.799156   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.799164   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.799170   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.802967   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.803491   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:48.298732   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.298760   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.298772   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.298778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.302148   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:48.799142   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.799170   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.799182   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.799190   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.802961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.298717   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.298741   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.298752   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.298759   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.302024   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.798693   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.798713   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.798721   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.798726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.832585   95151 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1028 11:55:49.833180   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:50.298166   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.298188   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.298197   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.298201   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.301302   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:50.798073   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.798095   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.798104   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.798108   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.803748   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:55:51.298872   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.298899   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.298910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.298913   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.301397   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:51.798388   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.798420   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.798428   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.798434   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.801659   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:52.298527   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.298549   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.298561   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.298565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.301585   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:52.302112   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:52.798187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.798212   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.798223   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.798228   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.801528   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.298514   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.298542   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.298550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.298554   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.301689   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.798539   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.798559   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.798574   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.798578   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.801491   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:54.298293   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.298317   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.298325   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.298330   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.302064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:54.302719   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:54.798749   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.798769   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.798778   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.798783   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.801841   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.298678   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.298701   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.298712   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.298716   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.302094   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.798085   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.798105   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.798113   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.798116   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.800935   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:56.298920   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.298949   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.298958   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.298962   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.302100   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.798358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.798381   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.798390   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.798394   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.801648   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.802259   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:57.298900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.298925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.298937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.298943   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.301768   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:57.798111   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.798136   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.798148   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.798154   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.802245   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:55:58.299121   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.299149   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.299162   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.299171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.302703   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:58.798590   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.798615   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.798628   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.798634   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.801208   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:59.299008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.299036   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.299047   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.299054   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.302735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:59.303420   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:59.798874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.798896   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.798903   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.798907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.802046   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.298533   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.298555   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.298562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.298567   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.301628   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.798592   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.798612   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.798619   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.798623   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.801213   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.298108   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.298133   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.298143   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.298148   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301184   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.301784   95151 node_ready.go:49] node "ha-273199-m03" has status "Ready":"True"
	I1028 11:56:01.301805   95151 node_ready.go:38] duration metric: took 17.503895303s for node "ha-273199-m03" to be "Ready" ...
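
node_ready.go implements the half-second polling visible in the GETs above: fetch the Node object and check its Ready condition until it reports True or the six-minute budget runs out. A minimal sketch of the same wait written against client-go's typed clientset (an illustration of the pattern, not minikube's code; the kubeconfig path is hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test uses its Jenkins workspace copy.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-273199-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
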
	I1028 11:56:01.301814   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:01.301887   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:01.301896   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.301903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301911   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.308580   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:01.316771   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.316873   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:56:01.316885   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.316900   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.316907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.320308   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.320987   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.321003   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.321013   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.321019   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.323787   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.324347   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.324365   95151 pod_ready.go:82] duration metric: took 7.565058ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324373   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324419   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:56:01.324427   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.324433   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.324439   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.326735   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.327335   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.327355   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.327365   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.327373   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.329530   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.330057   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.330074   95151 pod_ready.go:82] duration metric: took 5.693547ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330086   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330136   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:56:01.330146   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.330155   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.330165   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.332526   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.332999   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.333016   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.333027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.333032   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.334989   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.335422   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.335440   95151 pod_ready.go:82] duration metric: took 5.348301ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335448   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335488   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:56:01.335496   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.335502   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.335506   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.337739   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.338582   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:01.338597   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.338604   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.338609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.340562   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.341152   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.341169   95151 pod_ready.go:82] duration metric: took 5.715551ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.341177   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.498553   95151 request.go:632] Waited for 157.309109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498638   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498650   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.498660   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.498665   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.501894   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.699071   95151 request.go:632] Waited for 196.385515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699155   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699161   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.699169   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.699174   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.702324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.702894   95151 pod_ready.go:93] pod "etcd-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.702916   95151 pod_ready.go:82] duration metric: took 361.733856ms for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
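
The "Waited for ... due to client-side throttling" messages above come from client-go's default client-side rate limiter (roughly QPS 5 / burst 10 when a rest.Config leaves them unset), which the burst of pod and node GETs exceeds. A minimal sketch of raising those limits on a rest.Config (an illustration only; the test client keeps the defaults here, which is why the waits are logged):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// Zero values mean QPS=5, Burst=10; raising them avoids the
	// client-side throttling waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
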
	I1028 11:56:01.702934   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.898705   95151 request.go:632] Waited for 195.691939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898957   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898985   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.898999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.899009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.902374   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.098254   95151 request.go:632] Waited for 195.287162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098328   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098335   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.098347   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.098353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.101196   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.101738   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.101763   95151 pod_ready.go:82] duration metric: took 398.820372ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.101781   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.298212   95151 request.go:632] Waited for 196.275952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298275   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298281   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.298290   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.298301   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.301860   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.499036   95151 request.go:632] Waited for 196.376254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499126   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499138   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.499147   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.499155   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.502306   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.502777   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.502797   95151 pod_ready.go:82] duration metric: took 401.004802ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.502809   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.698962   95151 request.go:632] Waited for 196.058055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699040   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699049   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.699060   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.699069   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.702304   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.898265   95151 request.go:632] Waited for 195.32967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898332   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898337   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.898346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.898349   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.901285   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.901755   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.901774   95151 pod_ready.go:82] duration metric: took 398.957477ms for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.901786   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.098215   95151 request.go:632] Waited for 196.338003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098302   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098312   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.098326   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.098336   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.101391   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.299109   95151 request.go:632] Waited for 197.052748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299198   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.299211   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.299219   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.302429   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.303124   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.303143   95151 pod_ready.go:82] duration metric: took 401.346731ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.303154   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.499186   95151 request.go:632] Waited for 195.929738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499255   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499260   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.499268   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.499283   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.502463   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.698544   95151 request.go:632] Waited for 195.349647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698622   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698627   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.698635   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.698642   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.701741   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.702403   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.702426   95151 pod_ready.go:82] duration metric: took 399.264829ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.702441   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.898913   95151 request.go:632] Waited for 196.399022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899002   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899011   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.899023   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.899029   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.902056   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.099025   95151 request.go:632] Waited for 196.30082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099105   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099116   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.099127   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.099137   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.102284   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.102800   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.102822   95151 pod_ready.go:82] duration metric: took 400.371733ms for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.102837   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.299058   95151 request.go:632] Waited for 196.137259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299139   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299144   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.299153   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.299157   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.302746   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.499079   95151 request.go:632] Waited for 195.393701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499163   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499171   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.499185   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.499195   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.503387   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:56:04.504037   95151 pod_ready.go:93] pod "kube-proxy-9g4h7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.504061   95151 pod_ready.go:82] duration metric: took 401.216048ms for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.504076   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.698976   95151 request.go:632] Waited for 194.814472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699071   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.699079   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.699084   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.702055   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:04.898609   95151 request.go:632] Waited for 195.739677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898675   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898683   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.898693   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.898700   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.901923   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.902584   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.902605   95151 pod_ready.go:82] duration metric: took 398.518978ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.902614   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.098688   95151 request.go:632] Waited for 195.978821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098754   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098759   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.098768   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.098778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.102003   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.298290   95151 request.go:632] Waited for 195.293864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298361   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298369   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.298380   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.298386   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.301816   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.302344   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.302364   95151 pod_ready.go:82] duration metric: took 399.743307ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.302375   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.498499   95151 request.go:632] Waited for 196.032121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498559   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498565   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.498572   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.498584   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.501658   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.698555   95151 request.go:632] Waited for 196.349621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698639   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.698659   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.698670   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.701856   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.702478   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.702502   95151 pod_ready.go:82] duration metric: took 400.117869ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.702516   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.898432   95151 request.go:632] Waited for 195.801686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898504   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898512   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.898523   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.898535   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.901090   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:06.099148   95151 request.go:632] Waited for 197.39166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099243   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099256   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.099266   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.099273   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.102573   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.103298   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.103317   95151 pod_ready.go:82] duration metric: took 400.794152ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.103328   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.298494   95151 request.go:632] Waited for 195.077295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298597   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298623   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.298634   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.298639   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.301973   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.499177   95151 request.go:632] Waited for 196.369372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499245   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499253   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.499263   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.499271   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.503129   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.503622   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.503653   95151 pod_ready.go:82] duration metric: took 400.317222ms for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.503666   95151 pod_ready.go:39] duration metric: took 5.2018361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:06.503683   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:56:06.503735   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:56:06.519167   95151 api_server.go:72] duration metric: took 23.032149937s to wait for apiserver process to appear ...
	I1028 11:56:06.519193   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:56:06.519218   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:56:06.524148   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:56:06.524235   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:56:06.524247   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.524259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.524269   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.525138   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:56:06.525206   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:56:06.525222   95151 api_server.go:131] duration metric: took 6.021057ms to wait for apiserver health ...
	I1028 11:56:06.525232   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:56:06.698920   95151 request.go:632] Waited for 173.589854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699014   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699026   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.699037   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.699046   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.705719   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:06.711799   95151 system_pods.go:59] 24 kube-system pods found
	I1028 11:56:06.711826   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:06.711831   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:06.711834   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:06.711837   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:06.711840   95151 system_pods.go:61] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:06.711845   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:06.711849   95151 system_pods.go:61] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:06.711858   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:06.711864   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:06.711869   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:06.711877   95151 system_pods.go:61] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:06.711884   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:06.711893   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:06.711901   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:06.711906   95151 system_pods.go:61] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:06.711911   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:06.711917   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:06.711923   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:06.711926   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:06.711932   95151 system_pods.go:61] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:06.711935   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:06.711940   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:06.711943   95151 system_pods.go:61] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:06.711947   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:06.711955   95151 system_pods.go:74] duration metric: took 186.713107ms to wait for pod list to return data ...
	I1028 11:56:06.711967   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:56:06.899177   95151 request.go:632] Waited for 187.113111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899242   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.899250   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.899255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.902353   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.902463   95151 default_sa.go:45] found service account: "default"
	I1028 11:56:06.902477   95151 default_sa.go:55] duration metric: took 190.499796ms for default service account to be created ...
	I1028 11:56:06.902489   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:56:07.098925   95151 request.go:632] Waited for 196.358925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099006   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099015   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.099027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.099034   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.104802   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:56:07.111244   95151 system_pods.go:86] 24 kube-system pods found
	I1028 11:56:07.111271   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:07.111276   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:07.111280   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:07.111284   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:07.111287   95151 system_pods.go:89] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:07.111292   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:07.111296   95151 system_pods.go:89] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:07.111301   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:07.111306   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:07.111312   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:07.111320   95151 system_pods.go:89] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:07.111326   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:07.111336   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:07.111342   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:07.111348   95151 system_pods.go:89] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:07.111354   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:07.111358   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:07.111364   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:07.111368   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:07.111374   95151 system_pods.go:89] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:07.111377   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:07.111386   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:07.111391   95151 system_pods.go:89] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:07.111394   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:07.111402   95151 system_pods.go:126] duration metric: took 208.905709ms to wait for k8s-apps to be running ...
	I1028 11:56:07.111413   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:56:07.111468   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:56:07.126987   95151 system_svc.go:56] duration metric: took 15.565787ms WaitForService to wait for kubelet
	I1028 11:56:07.127011   95151 kubeadm.go:582] duration metric: took 23.639999996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:56:07.127031   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:56:07.298754   95151 request.go:632] Waited for 171.640481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298832   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298839   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.298848   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.298857   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.302715   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:07.303776   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303797   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303807   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303810   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303814   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303817   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303821   95151 node_conditions.go:105] duration metric: took 176.784967ms to run NodePressure ...
	I1028 11:56:07.303834   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:56:07.303857   95151 start.go:255] writing updated cluster config ...
	I1028 11:56:07.304142   95151 ssh_runner.go:195] Run: rm -f paused
	I1028 11:56:07.355822   95151 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:56:07.357678   95151 out.go:177] * Done! kubectl is now configured to use "ha-273199" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.222355585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d2bb9e0-8e94-4f43-9595-fc06548543e4 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.223652405Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4bd6ddf-b7f5-4bbd-b159-a4ed738e9c4e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.224134675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116794224108463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4bd6ddf-b7f5-4bbd-b159-a4ed738e9c4e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.224660834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5cb69d6-a139-407f-8db1-e460a627f141 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.224748621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5cb69d6-a139-407f-8db1-e460a627f141 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.225062799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5cb69d6-a139-407f-8db1-e460a627f141 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.257324598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad32c510-b975-4e08-a265-d00ff1060757 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.257413442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad32c510-b975-4e08-a265-d00ff1060757 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.258344167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f38d0d0-9cbd-4bca-ad64-740da9a0ec02 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.258799145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116794258777258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f38d0d0-9cbd-4bca-ad64-740da9a0ec02 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.259316913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1c0cab9-a733-411a-adf6-54169cecece0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.259380568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1c0cab9-a733-411a-adf6-54169cecece0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.259612624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1c0cab9-a733-411a-adf6-54169cecece0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.271851568Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e969afad-8372-4d2e-8496-ae5d45a6a100 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.272278367Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-fnvwg,Uid:7e89846f-39f0-42a4-b343-0ae004376bc7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116568595326394,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:56:08.271095605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7e8f1437-aa9b-4d11-a516-f545f55e271c,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1730116437166402002,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T11:53:56.836966681Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc26g,Uid:352843f5-74ea-4f39-9b5e-8a14206f5ef6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116437152514863,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74ea-4f39-9b5e-8a14206f5ef6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:56.837780003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7rnn9,Uid:6addf18c-48d4-4b46-9695-d3c73f66dcf7,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1730116437137041444,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:56.826411741Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-tr5vf,Uid:1523079e-d7eb-432d-8023-83ac95c1c853,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116424827712969,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-28T11:53:43.016311556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&PodSandboxMetadata{Name:kindnet-2gldl,Uid:669d86dc-15f1-4cda-9f16-6ebfabad12ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116424826468891,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:43.020213220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-273199,Uid:ec1fb61a398f082d62933fd99a5e91c8,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1730116411862344870,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{kubernetes.io/config.hash: ec1fb61a398f082d62933fd99a5e91c8,kubernetes.io/config.seen: 2024-10-28T11:53:31.392312295Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-273199,Uid:2afa0eef601ae02df3405cd2d523046c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411860656774,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2afa
0eef601ae02df3405cd2d523046c,kubernetes.io/config.seen: 2024-10-28T11:53:31.392311542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-273199,Uid:de3f68a446dbf81588ffdebc94e65e05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411858786132,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de3f68a446dbf81588ffdebc94e65e05,kubernetes.io/config.seen: 2024-10-28T11:53:31.392310435Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-273199,Ui
d:67aa1fe51ef7e2d6640194db4db476a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411847852262,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.208:8443,kubernetes.io/config.hash: 67aa1fe51ef7e2d6640194db4db476a0,kubernetes.io/config.seen: 2024-10-28T11:53:31.392309218Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&PodSandboxMetadata{Name:etcd-ha-273199,Uid:af5894cc6d394a4575ef924f31654a84,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411838769279,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-273199,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: af5894cc6d394a4575ef924f31654a84,kubernetes.io/config.seen: 2024-10-28T11:53:31.392305945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e969afad-8372-4d2e-8496-ae5d45a6a100 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.272876039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2bc4e7d-0e7d-4107-af3e-814db4add794 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.272938493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2bc4e7d-0e7d-4107-af3e-814db4add794 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.273222303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2bc4e7d-0e7d-4107-af3e-814db4add794 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.295522492Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d021e34b-604f-4aae-a374-18a4e96ffbb3 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.295589518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d021e34b-604f-4aae-a374-18a4e96ffbb3 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.296514731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7d2b938-608a-4c5d-b10c-76a063ab02f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.296913451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116794296894160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7d2b938-608a-4c5d-b10c-76a063ab02f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.297346397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4bba2c2-3ee5-4b52-b255-53ea9311724c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.297413682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4bba2c2-3ee5-4b52-b255-53ea9311724c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:59:54 ha-273199 crio[663]: time="2024-10-28 11:59:54.297634081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4bba2c2-3ee5-4b52-b255-53ea9311724c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	609ad54d4add2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5aab280940ba8       busybox-7dff88458-fnvwg
	fe58f2eaad87a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   257fc926b128d       coredns-7c65d6cfc9-hc26g
	74749e3632776       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a33a6d6dc5f66       coredns-7c65d6cfc9-7rnn9
	72c80fedf6643       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   53cd5c1c15675       storage-provisioner
	e082051f544c2       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   ef059ce23254d       kindnet-2gldl
	82471ae5ddf92       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   0cbf13a852cd2       kube-proxy-tr5vf
	39409b2e85012       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   cc7ea362731d6       kube-vip-ha-273199
	8b350f0da3b16       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   43ab783eb9151       kube-apiserver-ha-273199
	07773cb979d8f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2541db65f40ae       kube-controller-manager-ha-273199
	6fb4822a5b791       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   737b1cd7f74b4       kube-scheduler-ha-273199
	ec2df51593c58       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   32e3db6238d43       etcd-ha-273199
	
	
	==> coredns [74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d] <==
	[INFO] 10.244.1.2:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227007s
	[INFO] 10.244.1.2:38770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002925427s
	[INFO] 10.244.1.2:48927 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147448s
	[INFO] 10.244.1.2:38077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192376s
	[INFO] 10.244.0.4:54968 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.0.4:57503 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110201s
	[INFO] 10.244.0.4:34291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061267s
	[INFO] 10.244.0.4:50921 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128077s
	[INFO] 10.244.0.4:39917 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062677s
	[INFO] 10.244.2.2:60183 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014203s
	[INFO] 10.244.2.2:40291 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692422s
	[INFO] 10.244.2.2:46423 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149349s
	[INFO] 10.244.2.2:54634 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124106s
	[INFO] 10.244.1.2:50363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142769s
	[INFO] 10.244.1.2:35968 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225253s
	[INFO] 10.244.1.2:45996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107605s
	[INFO] 10.244.1.2:49921 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093269s
	[INFO] 10.244.0.4:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012322s
	[INFO] 10.244.2.2:52722 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002033s
	[INFO] 10.244.2.2:57825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011394s
	[INFO] 10.244.1.2:34495 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211997s
	[INFO] 10.244.1.2:44656 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000288144s
	[INFO] 10.244.0.4:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021258s
	[INFO] 10.244.2.2:60661 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153264s
	[INFO] 10.244.2.2:45534 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088052s
	
	
	==> coredns [fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce] <==
	[INFO] 10.244.0.4:38250 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001327706s
	[INFO] 10.244.0.4:43351 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000111923s
	[INFO] 10.244.0.4:51500 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001177333s
	[INFO] 10.244.2.2:48939 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000124212s
	[INFO] 10.244.2.2:50808 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000124833s
	[INFO] 10.244.1.2:47587 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190204s
	[INFO] 10.244.0.4:58247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001672481s
	[INFO] 10.244.0.4:37091 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169137s
	[INFO] 10.244.0.4:48641 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098052s
	[INFO] 10.244.2.2:54836 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104545s
	[INFO] 10.244.2.2:40126 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854336s
	[INFO] 10.244.2.2:52894 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163896s
	[INFO] 10.244.2.2:35333 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230414s
	[INFO] 10.244.0.4:41974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152869s
	[INFO] 10.244.0.4:36380 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062783s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048517s
	[INFO] 10.244.2.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018024s
	[INFO] 10.244.2.2:38193 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125455s
	[INFO] 10.244.1.2:33651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271979s
	[INFO] 10.244.1.2:35705 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159131s
	[INFO] 10.244.0.4:48176 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111737s
	[INFO] 10.244.0.4:38598 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127464s
	[INFO] 10.244.0.4:32940 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000141046s
	[INFO] 10.244.2.2:43181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212895s
	[INFO] 10.244.2.2:43421 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090558s
	
	
	==> describe nodes <==
	Name:               ha-273199
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:53:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-273199
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c1c6593d854f8388a3b75213b790ab
	  System UUID:                c4c1c659-3d85-4f83-88a3-b75213b790ab
	  Boot ID:                    1bfb0ff9-0991-4c08-97cb-b1b218815106
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-7c65d6cfc9-7rnn9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-hc26g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-273199                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-2gldl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m12s
	  kube-system                 kube-apiserver-ha-273199             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-273199    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-tr5vf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-ha-273199             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-273199                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m23s)  kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s                  kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s                  kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s                  kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  NodeReady                5m58s                  kubelet          Node ha-273199 status is now: NodeReady
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	
	
	Name:               ha-273199-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:54:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:57:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-273199-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d185c9b1be043df924a5dc234d517bb
	  System UUID:                2d185c9b-1be0-43df-924a-5dc234d517bb
	  Boot ID:                    707068c3-7da2-4705-9622-6b089ce29c40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8tvkk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-273199-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-ts2mp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-273199-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-273199-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-nrzn7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-273199-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-273199-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-273199-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  NodeNotReady             112s                   node-controller  Node ha-273199-m02 status is now: NodeNotReady
	
	
	Name:               ha-273199-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:55:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-273199-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d112805c85f46e58297ecf352114eb9
	  System UUID:                1d112805-c85f-46e5-8297-ecf352114eb9
	  Boot ID:                    07c61f8b-a2c4-4310-b7a1-41ac039bba9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g54mk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-273199-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m12s
	  kube-system                 kindnet-rz4mf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-ha-273199-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-controller-manager-ha-273199-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-9g4h7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-273199-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-273199-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node ha-273199-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	
	
	Name:               ha-273199-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_56_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:57:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ha-273199-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43b84cefa5dd4131ade4071e67ae7a87
	  System UUID:                43b84cef-a5dd-4131-ade4-071e67ae7a87
	  Boot ID:                    bfbeda91-dd05-4597-adc6-b479c1c2dd66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bx2hn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-7pzm5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m14s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m14s)  kubelet          Node ha-273199-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m14s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-273199-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049625] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.737052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891479] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.789015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.644647] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.122482] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.184258] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.115821] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.235503] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.601274] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.514017] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.057056] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251877] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.071885] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.801233] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.354632] kauditd_printk_skb: 38 callbacks suppressed
	[Oct28 11:54] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3] <==
	{"level":"warn","ts":"2024-10-28T11:59:54.398887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.498775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.546332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.554612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.558737Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.570169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.576684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.582970Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.588065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.591562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.596397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.598676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.601944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.608270Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.610787Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.613224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.618361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.623388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.628357Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.631264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.634348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.637290Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.642924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.648930Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:59:54.699227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:59:54 up 6 min,  0 users,  load average: 0.40, 0.35, 0.18
	Linux ha-273199 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9] <==
	I1028 11:59:16.530799       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:26.530030       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:26.530150       1 main.go:300] handling current node
	I1028 11:59:26.530184       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:26.530202       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:26.530461       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:26.530495       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:26.530632       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:26.530655       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:36.531055       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:36.531126       1 main.go:300] handling current node
	I1028 11:59:36.531149       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:36.531155       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:36.531406       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:36.531425       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:36.531556       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:36.531571       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:46.530412       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:46.530590       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:46.531165       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:46.531265       1 main.go:300] handling current node
	I1028 11:59:46.531299       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:46.531355       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:46.531643       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:46.531670       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56] <==
	I1028 11:53:37.479954       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:53:38.366724       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:53:38.396043       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:53:38.413224       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:53:42.979540       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:53:43.083644       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:55:40.973661       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.973734       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.741µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1028 11:55:40.974882       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.976075       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.977370       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.890629ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1028 11:56:12.749438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33980: use of closed network connection
	E1028 11:56:12.923851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33996: use of closed network connection
	E1028 11:56:13.281780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34038: use of closed network connection
	E1028 11:56:13.456851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34054: use of closed network connection
	E1028 11:56:13.625829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34076: use of closed network connection
	E1028 11:56:13.792266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34090: use of closed network connection
	E1028 11:56:13.965533       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34100: use of closed network connection
	E1028 11:56:14.136211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34124: use of closed network connection
	E1028 11:56:14.414608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34162: use of closed network connection
	E1028 11:56:14.591367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34188: use of closed network connection
	E1028 11:56:14.760347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34200: use of closed network connection
	E1028 11:56:14.922486       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34206: use of closed network connection
	E1028 11:56:15.092625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34220: use of closed network connection
	E1028 11:56:15.260557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34244: use of closed network connection
	
	
	==> kube-controller-manager [07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df] <==
	I1028 11:56:41.255363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.287882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.504368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.718228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m03"
	I1028 11:56:41.866442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.227080       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-273199-m04"
	I1028 11:56:42.253788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.533477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199"
	I1028 11:56:43.703600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:43.733191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.386515       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.495725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:51.380862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:57:01.650243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:02.239477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:12.162277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:58:02.262145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.262722       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:58:02.289111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.371759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.617397ms"
	I1028 11:58:02.371873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.712µs"
	I1028 11:58:03.751638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:07.489074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	
	
	==> kube-proxy [82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:53:45.160274       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:53:45.173814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E1028 11:53:45.173942       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:53:45.205451       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:53:45.205509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:53:45.205540       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:53:45.207870       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:53:45.208259       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:53:45.208291       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:53:45.209606       1 config.go:328] "Starting node config controller"
	I1028 11:53:45.209665       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:53:45.210054       1 config.go:199] "Starting service config controller"
	I1028 11:53:45.210078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:53:45.210110       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:53:45.210127       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:53:45.310570       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:53:45.310626       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:53:45.310585       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c] <==
	I1028 11:53:39.113228       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:55:40.277591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.278684       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 164d41fa-0fff-4f4c-8f09-011e57fc1094(kube-system/kindnet-whfj9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-whfj9"
	E1028 11:55:40.278764       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-whfj9"
	I1028 11:55:40.278832       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.294817       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.294939       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 88c92727-3ef1-4b38-9df5-771fe9917f5e(kube-system/kube-proxy-qxpt8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qxpt8"
	E1028 11:55:40.294972       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-qxpt8"
	I1028 11:55:40.295047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.307670       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.307788       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4899b8e5-73ce-487e-81ca-f833a1dc900b(kube-system/kube-proxy-9g4h7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9g4h7"
	E1028 11:55:40.307822       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-9g4h7"
	I1028 11:55:40.307855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.324371       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:40.324469       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6b2fd99-538e-49be-bda5-b0e1c9edb32c(kube-system/kindnet-4bn7m) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4bn7m"
	E1028 11:55:40.324505       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-4bn7m"
	I1028 11:55:40.324540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:42.324511       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:55:42.324607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33ad0e92-e29c-4e54-8593-7cffd69fd439(kube-system/kindnet-rz4mf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rz4mf"
	E1028 11:55:42.324641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-rz4mf"
	I1028 11:55:42.324700       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:56:08.295366       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	E1028 11:56:08.295536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7e89846f-39f0-42a4-b343-0ae004376bc7(default/busybox-7dff88458-fnvwg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fnvwg"
	E1028 11:56:08.295580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" pod="default/busybox-7dff88458-fnvwg"
	I1028 11:56:08.295605       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	
	
	==> kubelet <==
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:58:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351743    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351767    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353760    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353814    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356841    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356866    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358886    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358944    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.361731    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.362240    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363560    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363977    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.290570    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366212    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366235    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:48 ha-273199 kubelet[1304]: E1028 11:59:48.367653    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116788367307757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:48 ha-273199 kubelet[1304]: E1028 11:59:48.367685    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116788367307757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-273199 -n ha-273199
helpers_test.go:261: (dbg) Run:  kubectl --context ha-273199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1028 11:59:57.309079   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.316005304s)
ha_test.go:309: expected profile "ha-273199" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-273199\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-273199\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-273199\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.208\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.225\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.14\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.29\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\"
:false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"
MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
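The assertion above boils down to one check: the test runs `out/minikube-linux-amd64 profile list --output json` and expects the "ha-273199" profile's Status field to be "HAppy", but the report shows "Unknown". The following is a minimal standalone sketch (not part of the test suite) of how that check can be reproduced by hand; the struct fields mirror the "valid"/"Name"/"Status" keys visible in the JSON quoted in the failure message, and the binary path is the one the harness itself invokes, so both are assumptions tied to this report's environment.

// profilestatus.go - hand-check the profile status the HAppy assertion looks at.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the relevant parts of `minikube profile list --output json`
// as shown in the failure message above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Same command the test runs at ha_test.go:281.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "HAppy" here; this report captured "Unknown".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}

Running it against the cluster in this state should print "ha-273199: Unknown", matching the failure; once all control-plane nodes report healthy, the same query is what flips the status back to "HAppy".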
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-273199 -n ha-273199
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 logs -n 25: (1.260333638s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m03_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m04 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp testdata/cp-test.txt                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m04_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03:/home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m03 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-273199 node stop m02 -v=7                                                     | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-273199 node start m02 -v=7                                                    | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:52:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:52:57.905238   95151 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:52:57.905348   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905358   95151 out.go:358] Setting ErrFile to fd 2...
	I1028 11:52:57.905363   95151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:57.905525   95151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:52:57.906087   95151 out.go:352] Setting JSON to false
	I1028 11:52:57.907021   95151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5728,"bootTime":1730110650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:52:57.907126   95151 start.go:139] virtualization: kvm guest
	I1028 11:52:57.909586   95151 out.go:177] * [ha-273199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:52:57.911228   95151 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:52:57.911224   95151 notify.go:220] Checking for updates...
	I1028 11:52:57.912881   95151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:52:57.914463   95151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:52:57.915977   95151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:57.917406   95151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:52:57.918858   95151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:52:57.920382   95151 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:52:57.956004   95151 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:52:57.957439   95151 start.go:297] selected driver: kvm2
	I1028 11:52:57.957454   95151 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:52:57.957467   95151 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:52:57.958216   95151 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.958309   95151 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:52:57.973197   95151 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:52:57.973244   95151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:52:57.973498   95151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:52:57.973536   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:52:57.973597   95151 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:52:57.973608   95151 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:52:57.973673   95151 start.go:340] cluster config:
	{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:52:57.973775   95151 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:52:57.975793   95151 out.go:177] * Starting "ha-273199" primary control-plane node in "ha-273199" cluster
	I1028 11:52:57.977410   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:52:57.977445   95151 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:52:57.977454   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:52:57.977554   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:52:57.977568   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:52:57.977888   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:52:57.977914   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json: {Name:mk29535b2b544db75ec78b7c2f3618df28a4affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:52:57.978059   95151 start.go:360] acquireMachinesLock for ha-273199: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:52:57.978100   95151 start.go:364] duration metric: took 24.255µs to acquireMachinesLock for "ha-273199"
	I1028 11:52:57.978122   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:52:57.978188   95151 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:52:57.980939   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:52:57.981099   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:57.981147   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:57.995094   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I1028 11:52:57.995525   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:57.996093   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:52:57.996110   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:57.996513   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:57.996734   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:52:57.996948   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:52:57.997198   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:52:57.997236   95151 client.go:168] LocalClient.Create starting
	I1028 11:52:57.997293   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:52:57.997346   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997371   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997456   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:52:57.997488   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:52:57.997509   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:52:57.997543   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:52:57.997564   95151 main.go:141] libmachine: (ha-273199) Calling .PreCreateCheck
	I1028 11:52:57.998077   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:52:57.998575   95151 main.go:141] libmachine: Creating machine...
	I1028 11:52:57.998591   95151 main.go:141] libmachine: (ha-273199) Calling .Create
	I1028 11:52:57.998762   95151 main.go:141] libmachine: (ha-273199) Creating KVM machine...
	I1028 11:52:58.000213   95151 main.go:141] libmachine: (ha-273199) DBG | found existing default KVM network
	I1028 11:52:58.000923   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.000765   95174 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045e0}
	I1028 11:52:58.000944   95151 main.go:141] libmachine: (ha-273199) DBG | created network xml: 
	I1028 11:52:58.000958   95151 main.go:141] libmachine: (ha-273199) DBG | <network>
	I1028 11:52:58.000965   95151 main.go:141] libmachine: (ha-273199) DBG |   <name>mk-ha-273199</name>
	I1028 11:52:58.000975   95151 main.go:141] libmachine: (ha-273199) DBG |   <dns enable='no'/>
	I1028 11:52:58.000981   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.000999   95151 main.go:141] libmachine: (ha-273199) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:52:58.001012   95151 main.go:141] libmachine: (ha-273199) DBG |     <dhcp>
	I1028 11:52:58.001028   95151 main.go:141] libmachine: (ha-273199) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:52:58.001044   95151 main.go:141] libmachine: (ha-273199) DBG |     </dhcp>
	I1028 11:52:58.001076   95151 main.go:141] libmachine: (ha-273199) DBG |   </ip>
	I1028 11:52:58.001096   95151 main.go:141] libmachine: (ha-273199) DBG |   
	I1028 11:52:58.001107   95151 main.go:141] libmachine: (ha-273199) DBG | </network>
	I1028 11:52:58.001116   95151 main.go:141] libmachine: (ha-273199) DBG | 
	I1028 11:52:58.006306   95151 main.go:141] libmachine: (ha-273199) DBG | trying to create private KVM network mk-ha-273199 192.168.39.0/24...
	I1028 11:52:58.068689   95151 main.go:141] libmachine: (ha-273199) DBG | private KVM network mk-ha-273199 192.168.39.0/24 created
	I1028 11:52:58.068733   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.068675   95174 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.068745   95151 main.go:141] libmachine: (ha-273199) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.068764   95151 main.go:141] libmachine: (ha-273199) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:52:58.068841   95151 main.go:141] libmachine: (ha-273199) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:52:58.350673   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.350525   95174 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa...
	I1028 11:52:58.570859   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570715   95174 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk...
	I1028 11:52:58.570893   95151 main.go:141] libmachine: (ha-273199) DBG | Writing magic tar header
	I1028 11:52:58.570902   95151 main.go:141] libmachine: (ha-273199) DBG | Writing SSH key tar header
	I1028 11:52:58.570910   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:58.570831   95174 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 ...
	I1028 11:52:58.570926   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199
	I1028 11:52:58.570998   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199 (perms=drwx------)
	I1028 11:52:58.571026   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:52:58.571056   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:52:58.571074   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:58.571082   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:52:58.571094   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:52:58.571102   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:52:58.571107   95151 main.go:141] libmachine: (ha-273199) DBG | Checking permissions on dir: /home
	I1028 11:52:58.571113   95151 main.go:141] libmachine: (ha-273199) DBG | Skipping /home - not owner
	I1028 11:52:58.571126   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:52:58.571143   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:52:58.571178   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:52:58.571193   95151 main.go:141] libmachine: (ha-273199) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:52:58.571219   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:58.572260   95151 main.go:141] libmachine: (ha-273199) define libvirt domain using xml: 
	I1028 11:52:58.572286   95151 main.go:141] libmachine: (ha-273199) <domain type='kvm'>
	I1028 11:52:58.572294   95151 main.go:141] libmachine: (ha-273199)   <name>ha-273199</name>
	I1028 11:52:58.572299   95151 main.go:141] libmachine: (ha-273199)   <memory unit='MiB'>2200</memory>
	I1028 11:52:58.572304   95151 main.go:141] libmachine: (ha-273199)   <vcpu>2</vcpu>
	I1028 11:52:58.572308   95151 main.go:141] libmachine: (ha-273199)   <features>
	I1028 11:52:58.572313   95151 main.go:141] libmachine: (ha-273199)     <acpi/>
	I1028 11:52:58.572324   95151 main.go:141] libmachine: (ha-273199)     <apic/>
	I1028 11:52:58.572330   95151 main.go:141] libmachine: (ha-273199)     <pae/>
	I1028 11:52:58.572339   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572346   95151 main.go:141] libmachine: (ha-273199)   </features>
	I1028 11:52:58.572356   95151 main.go:141] libmachine: (ha-273199)   <cpu mode='host-passthrough'>
	I1028 11:52:58.572364   95151 main.go:141] libmachine: (ha-273199)   
	I1028 11:52:58.572375   95151 main.go:141] libmachine: (ha-273199)   </cpu>
	I1028 11:52:58.572382   95151 main.go:141] libmachine: (ha-273199)   <os>
	I1028 11:52:58.572393   95151 main.go:141] libmachine: (ha-273199)     <type>hvm</type>
	I1028 11:52:58.572409   95151 main.go:141] libmachine: (ha-273199)     <boot dev='cdrom'/>
	I1028 11:52:58.572428   95151 main.go:141] libmachine: (ha-273199)     <boot dev='hd'/>
	I1028 11:52:58.572442   95151 main.go:141] libmachine: (ha-273199)     <bootmenu enable='no'/>
	I1028 11:52:58.572452   95151 main.go:141] libmachine: (ha-273199)   </os>
	I1028 11:52:58.572462   95151 main.go:141] libmachine: (ha-273199)   <devices>
	I1028 11:52:58.572470   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='cdrom'>
	I1028 11:52:58.572481   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/boot2docker.iso'/>
	I1028 11:52:58.572489   95151 main.go:141] libmachine: (ha-273199)       <target dev='hdc' bus='scsi'/>
	I1028 11:52:58.572513   95151 main.go:141] libmachine: (ha-273199)       <readonly/>
	I1028 11:52:58.572529   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572544   95151 main.go:141] libmachine: (ha-273199)     <disk type='file' device='disk'>
	I1028 11:52:58.572557   95151 main.go:141] libmachine: (ha-273199)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:52:58.572570   95151 main.go:141] libmachine: (ha-273199)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/ha-273199.rawdisk'/>
	I1028 11:52:58.572580   95151 main.go:141] libmachine: (ha-273199)       <target dev='hda' bus='virtio'/>
	I1028 11:52:58.572589   95151 main.go:141] libmachine: (ha-273199)     </disk>
	I1028 11:52:58.572599   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572625   95151 main.go:141] libmachine: (ha-273199)       <source network='mk-ha-273199'/>
	I1028 11:52:58.572647   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572659   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572669   95151 main.go:141] libmachine: (ha-273199)     <interface type='network'>
	I1028 11:52:58.572681   95151 main.go:141] libmachine: (ha-273199)       <source network='default'/>
	I1028 11:52:58.572689   95151 main.go:141] libmachine: (ha-273199)       <model type='virtio'/>
	I1028 11:52:58.572698   95151 main.go:141] libmachine: (ha-273199)     </interface>
	I1028 11:52:58.572708   95151 main.go:141] libmachine: (ha-273199)     <serial type='pty'>
	I1028 11:52:58.572719   95151 main.go:141] libmachine: (ha-273199)       <target port='0'/>
	I1028 11:52:58.572747   95151 main.go:141] libmachine: (ha-273199)     </serial>
	I1028 11:52:58.572759   95151 main.go:141] libmachine: (ha-273199)     <console type='pty'>
	I1028 11:52:58.572769   95151 main.go:141] libmachine: (ha-273199)       <target type='serial' port='0'/>
	I1028 11:52:58.572780   95151 main.go:141] libmachine: (ha-273199)     </console>
	I1028 11:52:58.572789   95151 main.go:141] libmachine: (ha-273199)     <rng model='virtio'>
	I1028 11:52:58.572801   95151 main.go:141] libmachine: (ha-273199)       <backend model='random'>/dev/random</backend>
	I1028 11:52:58.572815   95151 main.go:141] libmachine: (ha-273199)     </rng>
	I1028 11:52:58.572825   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572833   95151 main.go:141] libmachine: (ha-273199)     
	I1028 11:52:58.572844   95151 main.go:141] libmachine: (ha-273199)   </devices>
	I1028 11:52:58.572852   95151 main.go:141] libmachine: (ha-273199) </domain>
	I1028 11:52:58.572861   95151 main.go:141] libmachine: (ha-273199) 
	I1028 11:52:58.577134   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:42:ba:53 in network default
	I1028 11:52:58.577786   95151 main.go:141] libmachine: (ha-273199) Ensuring networks are active...
	I1028 11:52:58.577821   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:58.578546   95151 main.go:141] libmachine: (ha-273199) Ensuring network default is active
	I1028 11:52:58.578856   95151 main.go:141] libmachine: (ha-273199) Ensuring network mk-ha-273199 is active
	I1028 11:52:58.579358   95151 main.go:141] libmachine: (ha-273199) Getting domain xml...
	I1028 11:52:58.580118   95151 main.go:141] libmachine: (ha-273199) Creating domain...
	I1028 11:52:59.782570   95151 main.go:141] libmachine: (ha-273199) Waiting to get IP...
	I1028 11:52:59.783496   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:52:59.783901   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:52:59.783927   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:52:59.783876   95174 retry.go:31] will retry after 311.934457ms: waiting for machine to come up
	I1028 11:53:00.097445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.097916   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.097939   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.097877   95174 retry.go:31] will retry after 388.795801ms: waiting for machine to come up
	I1028 11:53:00.488689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.489130   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.489162   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.489047   95174 retry.go:31] will retry after 341.439374ms: waiting for machine to come up
	I1028 11:53:00.831825   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:00.832326   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:00.832354   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:00.832259   95174 retry.go:31] will retry after 537.545151ms: waiting for machine to come up
	I1028 11:53:01.371089   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.371572   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.371603   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.371503   95174 retry.go:31] will retry after 575.351282ms: waiting for machine to come up
	I1028 11:53:01.948343   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:01.948756   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:01.948778   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:01.948711   95174 retry.go:31] will retry after 886.467527ms: waiting for machine to come up
	I1028 11:53:02.836558   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:02.836900   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:02.836930   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:02.836853   95174 retry.go:31] will retry after 1.015980502s: waiting for machine to come up
	I1028 11:53:03.854959   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:03.855391   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:03.855437   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:03.855271   95174 retry.go:31] will retry after 1.050486499s: waiting for machine to come up
	I1028 11:53:04.907614   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:04.908201   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:04.908229   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:04.908145   95174 retry.go:31] will retry after 1.491832435s: waiting for machine to come up
	I1028 11:53:06.401910   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:06.402491   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:06.402518   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:06.402445   95174 retry.go:31] will retry after 1.441307708s: waiting for machine to come up
	I1028 11:53:07.846099   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:07.846578   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:07.846619   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:07.846526   95174 retry.go:31] will retry after 2.820165032s: waiting for machine to come up
	I1028 11:53:10.670238   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:10.670586   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:10.670616   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:10.670541   95174 retry.go:31] will retry after 2.961295833s: waiting for machine to come up
	I1028 11:53:13.633316   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:13.633782   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:13.633805   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:13.633732   95174 retry.go:31] will retry after 3.308614209s: waiting for machine to come up
	I1028 11:53:16.945522   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:16.946011   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find current IP address of domain ha-273199 in network mk-ha-273199
	I1028 11:53:16.946110   95151 main.go:141] libmachine: (ha-273199) DBG | I1028 11:53:16.946030   95174 retry.go:31] will retry after 3.990479431s: waiting for machine to come up
	I1028 11:53:20.937712   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938109   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has current primary IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:20.938130   95151 main.go:141] libmachine: (ha-273199) Found IP for machine: 192.168.39.208
	I1028 11:53:20.938142   95151 main.go:141] libmachine: (ha-273199) Reserving static IP address...
	I1028 11:53:20.938499   95151 main.go:141] libmachine: (ha-273199) DBG | unable to find host DHCP lease matching {name: "ha-273199", mac: "52:54:00:22:d4:52", ip: "192.168.39.208"} in network mk-ha-273199
	I1028 11:53:21.008969   95151 main.go:141] libmachine: (ha-273199) DBG | Getting to WaitForSSH function...
	I1028 11:53:21.008999   95151 main.go:141] libmachine: (ha-273199) Reserved static IP address: 192.168.39.208
	I1028 11:53:21.009011   95151 main.go:141] libmachine: (ha-273199) Waiting for SSH to be available...
	I1028 11:53:21.011668   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012047   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.012076   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.012164   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH client type: external
	I1028 11:53:21.012204   95151 main.go:141] libmachine: (ha-273199) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa (-rw-------)
	I1028 11:53:21.012233   95151 main.go:141] libmachine: (ha-273199) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:53:21.012252   95151 main.go:141] libmachine: (ha-273199) DBG | About to run SSH command:
	I1028 11:53:21.012267   95151 main.go:141] libmachine: (ha-273199) DBG | exit 0
	I1028 11:53:21.139407   95151 main.go:141] libmachine: (ha-273199) DBG | SSH cmd err, output: <nil>: 
	I1028 11:53:21.139608   95151 main.go:141] libmachine: (ha-273199) KVM machine creation complete!
	I1028 11:53:21.140109   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:21.140683   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.140882   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:21.141093   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:53:21.141114   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:21.142660   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:53:21.142693   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:53:21.142699   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:53:21.142707   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.144906   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145252   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.145272   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.145401   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.145570   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145700   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.145811   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.145966   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.146169   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.146182   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:53:21.258494   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:53:21.258518   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:53:21.258525   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.261399   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.261893   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.261920   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.262110   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.262319   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.262635   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.262887   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.263058   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.263068   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:53:21.376384   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:53:21.376474   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:53:21.376484   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:53:21.376495   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376737   95151 buildroot.go:166] provisioning hostname "ha-273199"
	I1028 11:53:21.376768   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.376959   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.379689   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380146   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.380176   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.380378   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.380584   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380744   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.380879   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.381094   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.381292   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.381311   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199 && echo "ha-273199" | sudo tee /etc/hostname
	I1028 11:53:21.505313   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 11:53:21.505340   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.507973   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508308   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.508335   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.508498   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.508627   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508764   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.508871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.509011   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:21.509180   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:21.509205   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:53:21.627427   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:53:21.627469   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:53:21.627526   95151 buildroot.go:174] setting up certificates
	I1028 11:53:21.627546   95151 provision.go:84] configureAuth start
	I1028 11:53:21.627563   95151 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 11:53:21.627864   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:21.630491   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.630851   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.630879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.631007   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.633459   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.633874   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.633904   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.634035   95151 provision.go:143] copyHostCerts
	I1028 11:53:21.634064   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634109   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:53:21.634121   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:53:21.634183   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:53:21.634289   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634308   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:53:21.634312   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:53:21.634344   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:53:21.634423   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634439   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:53:21.634443   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:53:21.634469   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:53:21.634525   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199 san=[127.0.0.1 192.168.39.208 ha-273199 localhost minikube]
	I1028 11:53:21.941769   95151 provision.go:177] copyRemoteCerts
	I1028 11:53:21.941844   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:53:21.941871   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:21.944301   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944588   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:21.944615   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:21.944775   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:21.945004   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:21.945172   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:21.945312   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.028802   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:53:22.028910   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:53:22.051394   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:53:22.051457   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1028 11:53:22.072047   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:53:22.072099   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:53:22.092704   95151 provision.go:87] duration metric: took 465.141947ms to configureAuth
	I1028 11:53:22.092729   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:53:22.092901   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:22.092986   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.095606   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.095961   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.095988   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.096168   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.096372   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096528   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.096655   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.096802   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.096954   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.096969   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:53:22.312757   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:53:22.312785   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:53:22.312806   95151 main.go:141] libmachine: (ha-273199) Calling .GetURL
	I1028 11:53:22.313992   95151 main.go:141] libmachine: (ha-273199) DBG | Using libvirt version 6000000
	I1028 11:53:22.316240   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316567   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.316595   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.316828   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:53:22.316850   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:53:22.316861   95151 client.go:171] duration metric: took 24.31961411s to LocalClient.Create
	I1028 11:53:22.316914   95151 start.go:167] duration metric: took 24.319696986s to libmachine.API.Create "ha-273199"
	I1028 11:53:22.316928   95151 start.go:293] postStartSetup for "ha-273199" (driver="kvm2")
	I1028 11:53:22.316942   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:53:22.316962   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.317200   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:53:22.317223   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.319445   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320158   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.320178   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.320347   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.320534   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.320674   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.320778   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.406034   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:53:22.409957   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:53:22.409983   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:53:22.410056   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:53:22.410194   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:53:22.410209   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:53:22.410362   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:53:22.418934   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:22.439625   95151 start.go:296] duration metric: took 122.683745ms for postStartSetup
	I1028 11:53:22.439684   95151 main.go:141] libmachine: (ha-273199) Calling .GetConfigRaw
	I1028 11:53:22.440268   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.442702   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443017   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.443035   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.443281   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:22.443438   95151 start.go:128] duration metric: took 24.465239541s to createHost
	I1028 11:53:22.443459   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.446282   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446621   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.446650   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.446768   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.446935   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447095   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.447222   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.447404   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:53:22.447574   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 11:53:22.447589   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:53:22.559751   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116402.538168741
	
	I1028 11:53:22.559780   95151 fix.go:216] guest clock: 1730116402.538168741
	I1028 11:53:22.559788   95151 fix.go:229] Guest: 2024-10-28 11:53:22.538168741 +0000 UTC Remote: 2024-10-28 11:53:22.443448629 +0000 UTC m=+24.575720280 (delta=94.720112ms)
	I1028 11:53:22.559821   95151 fix.go:200] guest clock delta is within tolerance: 94.720112ms
	I1028 11:53:22.559826   95151 start.go:83] releasing machines lock for "ha-273199", held for 24.581718789s
	I1028 11:53:22.559851   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.560134   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:22.562796   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563147   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.563185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.563312   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563844   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.563988   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:22.564076   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:53:22.564130   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.564190   95151 ssh_runner.go:195] Run: cat /version.json
	I1028 11:53:22.564216   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:22.566705   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.566929   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567041   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567064   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567296   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567390   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:22.567416   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:22.567469   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567580   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:22.567668   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567738   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:22.567794   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.567840   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:22.567980   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:22.670647   95151 ssh_runner.go:195] Run: systemctl --version
	I1028 11:53:22.676078   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:53:22.830303   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:53:22.836224   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:53:22.836288   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:53:22.850695   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:53:22.850718   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:53:22.850775   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:53:22.865306   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:53:22.877632   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:53:22.877682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:53:22.889956   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:53:22.901677   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:53:23.007362   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:53:23.168538   95151 docker.go:233] disabling docker service ...
	I1028 11:53:23.168615   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:53:23.181374   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:53:23.192932   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:53:23.310662   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:53:23.424314   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:53:23.437058   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:53:23.453309   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:53:23.453391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.462468   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:53:23.462530   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.471391   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.480284   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.489458   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:53:23.498558   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.507348   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:53:23.522430   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
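	(Note: taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a sketch reconstructed from the commands shown, not content captured from the VM, and the ordering of keys may differ:
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	)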
	I1028 11:53:23.531223   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:53:23.539417   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:53:23.539455   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:53:23.551001   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:53:23.559053   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:23.661360   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:53:23.745420   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:53:23.745500   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:53:23.749645   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:53:23.749737   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:53:23.753175   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:53:23.787639   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:53:23.787732   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.812312   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:53:23.837983   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:53:23.839279   95151 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 11:53:23.841862   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842156   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:23.842185   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:23.842344   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:53:23.845848   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:53:23.857277   95151 kubeadm.go:883] updating cluster {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:53:23.857375   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:23.857429   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:23.885745   95151 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:53:23.885803   95151 ssh_runner.go:195] Run: which lz4
	I1028 11:53:23.889147   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:53:23.889231   95151 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:53:23.892744   95151 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:53:23.892778   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:53:24.999101   95151 crio.go:462] duration metric: took 1.10988801s to copy over tarball
	I1028 11:53:24.999192   95151 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:53:26.940236   95151 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.941006419s)
	I1028 11:53:26.940272   95151 crio.go:469] duration metric: took 1.941134954s to extract the tarball
	I1028 11:53:26.940283   95151 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:53:26.975750   95151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:53:27.015231   95151 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:53:27.015255   95151 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:53:27.015267   95151 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1028 11:53:27.015382   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:53:27.015466   95151 ssh_runner.go:195] Run: crio config
	I1028 11:53:27.056277   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:27.056302   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:27.056316   95151 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:53:27.056348   95151 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-273199 NodeName:ha-273199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:53:27.056497   95151 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-273199"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
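	(Note: if a generated config like the kubeadm.yaml above needs to be sanity-checked outside a test run, one hedged option is kubeadm's dry-run mode against the same file the log later copies into place, assuming the minikube-bundled binaries are on PATH:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)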
	I1028 11:53:27.056525   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:53:27.056581   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:53:27.072483   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:53:27.072593   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:53:27.072658   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:53:27.081034   95151 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:53:27.081092   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:53:27.089111   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:53:27.103592   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:53:27.118272   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:53:27.132197   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:53:27.146233   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:53:27.149485   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
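	(Note: this rewrite and the earlier one at 11:53:23.845848 should leave /etc/hosts inside the guest with entries along these lines, assuming no conflicting pre-existing lines:
	  192.168.39.1	host.minikube.internal
	  192.168.39.254	control-plane.minikube.internal
	)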
	I1028 11:53:27.160138   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:53:27.266620   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:53:27.282436   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.208
	I1028 11:53:27.282457   95151 certs.go:194] generating shared ca certs ...
	I1028 11:53:27.282478   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.282670   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:53:27.282728   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:53:27.282741   95151 certs.go:256] generating profile certs ...
	I1028 11:53:27.282809   95151 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:53:27.282826   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt with IP's: []
	I1028 11:53:27.352056   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt ...
	I1028 11:53:27.352083   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt: {Name:mk85ba9e2d7e36c2dc386074345191c8f41db2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352257   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key ...
	I1028 11:53:27.352268   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key: {Name:mk9e399a746995b3286d90f34445304b7c10dcc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.352359   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602
	I1028 11:53:27.352376   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.254]
	I1028 11:53:27.701864   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 ...
	I1028 11:53:27.701927   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602: {Name:mkd8347f84237c1adf80fa2979e2851e438e06db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702124   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 ...
	I1028 11:53:27.702141   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602: {Name:mk8022b5d8b42b8f2926882e2d9f76f284ea38fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.702238   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:53:27.702318   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.99906602 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:53:27.702367   95151 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:53:27.702384   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt with IP's: []
	I1028 11:53:27.887171   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt ...
	I1028 11:53:27.887202   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt: {Name:mk8df5a7b5c3f3d68e29bbf5b564443cc1d6c268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.887348   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key ...
	I1028 11:53:27.887359   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key: {Name:mk563997b82cf259c7f4075de274f929660222b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:27.887428   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:53:27.887444   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:53:27.887455   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:53:27.887469   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:53:27.887479   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:53:27.887493   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:53:27.887505   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:53:27.887517   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:53:27.887565   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:53:27.887608   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:53:27.887618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:53:27.887660   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:53:27.887680   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:53:27.887702   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:53:27.887740   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:53:27.887767   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:53:27.887780   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:27.887797   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:53:27.888376   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:53:27.912711   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:53:27.933465   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:53:27.954641   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:53:27.975959   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:53:27.996205   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:53:28.020327   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:53:28.061582   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:53:28.089945   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:53:28.110791   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:53:28.131009   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:53:28.150891   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:53:28.165153   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:53:28.170365   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:53:28.179779   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183529   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.183568   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:53:28.188718   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:53:28.197725   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:53:28.206747   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210524   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.210567   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:53:28.215456   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:53:28.224449   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:53:28.233481   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237734   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.237779   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:53:28.242623   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
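	(Note: the hash-named symlinks created above follow OpenSSL's c_rehash convention: the link name is the subject hash plus a ".0" suffix. As a sketch, re-running the command already shown at 11:53:28.237779 should print the hash used for the last link:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem   # expected: 51391683
	)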
	I1028 11:53:28.251661   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:53:28.255167   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:53:28.255214   95151 kubeadm.go:392] StartCluster: {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:53:28.255281   95151 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:53:28.255311   95151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:53:28.288882   95151 cri.go:89] found id: ""
	I1028 11:53:28.288966   95151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:53:28.297523   95151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:53:28.306258   95151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:53:28.314625   95151 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:53:28.314641   95151 kubeadm.go:157] found existing configuration files:
	
	I1028 11:53:28.314676   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:53:28.322612   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:53:28.322668   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:53:28.330792   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:53:28.338690   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:53:28.338727   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:53:28.346773   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.354775   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:53:28.354815   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:53:28.362916   95151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:53:28.370667   95151 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:53:28.370718   95151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:53:28.378723   95151 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:53:28.563600   95151 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:53:38.972007   95151 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:53:38.972072   95151 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:53:38.972185   95151 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:53:38.972293   95151 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:53:38.972416   95151 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:53:38.972521   95151 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:53:38.974416   95151 out.go:235]   - Generating certificates and keys ...
	I1028 11:53:38.974509   95151 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:53:38.974601   95151 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:53:38.974706   95151 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:53:38.974787   95151 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:53:38.974879   95151 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:53:38.974959   95151 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:53:38.975036   95151 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:53:38.975286   95151 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975365   95151 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:53:38.975516   95151 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-273199 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 11:53:38.975611   95151 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:53:38.975722   95151 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:53:38.975797   95151 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:53:38.975877   95151 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:53:38.975944   95151 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:53:38.976014   95151 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:53:38.976064   95151 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:53:38.976141   95151 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:53:38.976202   95151 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:53:38.976272   95151 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:53:38.976334   95151 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:53:38.977999   95151 out.go:235]   - Booting up control plane ...
	I1028 11:53:38.978106   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:53:38.978178   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:53:38.978240   95151 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:53:38.978347   95151 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:53:38.978445   95151 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:53:38.978486   95151 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:53:38.978635   95151 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:53:38.978759   95151 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:53:38.978849   95151 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001498504s
	I1028 11:53:38.978951   95151 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:53:38.979035   95151 kubeadm.go:310] [api-check] The API server is healthy after 5.77087672s
	I1028 11:53:38.979160   95151 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:53:38.979301   95151 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:53:38.979391   95151 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:53:38.979587   95151 kubeadm.go:310] [mark-control-plane] Marking the node ha-273199 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:53:38.979669   95151 kubeadm.go:310] [bootstrap-token] Using token: 2y659k.kh228wx7fnaw6qlw
	I1028 11:53:38.980850   95151 out.go:235]   - Configuring RBAC rules ...
	I1028 11:53:38.980953   95151 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:53:38.981063   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:53:38.981194   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:53:38.981315   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:53:38.981461   95151 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:53:38.981577   95151 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:53:38.981701   95151 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:53:38.981766   95151 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:53:38.981845   95151 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:53:38.981853   95151 kubeadm.go:310] 
	I1028 11:53:38.981937   95151 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:53:38.981950   95151 kubeadm.go:310] 
	I1028 11:53:38.982070   95151 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:53:38.982082   95151 kubeadm.go:310] 
	I1028 11:53:38.982120   95151 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:53:38.982205   95151 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:53:38.982281   95151 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:53:38.982294   95151 kubeadm.go:310] 
	I1028 11:53:38.982369   95151 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:53:38.982381   95151 kubeadm.go:310] 
	I1028 11:53:38.982451   95151 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:53:38.982463   95151 kubeadm.go:310] 
	I1028 11:53:38.982538   95151 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:53:38.982640   95151 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:53:38.982741   95151 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:53:38.982752   95151 kubeadm.go:310] 
	I1028 11:53:38.982827   95151 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:53:38.982895   95151 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:53:38.982901   95151 kubeadm.go:310] 
	I1028 11:53:38.982972   95151 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983065   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 11:53:38.983084   95151 kubeadm.go:310] 	--control-plane 
	I1028 11:53:38.983090   95151 kubeadm.go:310] 
	I1028 11:53:38.983184   95151 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:53:38.983205   95151 kubeadm.go:310] 
	I1028 11:53:38.983290   95151 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2y659k.kh228wx7fnaw6qlw \
	I1028 11:53:38.983394   95151 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
	I1028 11:53:38.983404   95151 cni.go:84] Creating CNI manager for ""
	I1028 11:53:38.983412   95151 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:53:38.985768   95151 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:53:38.987136   95151 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:53:38.992611   95151 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:53:38.992633   95151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:53:39.010322   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
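
The two lines above copy the kindnet manifest to /var/tmp/minikube/cni.yaml and apply it with the pinned kubectl binary. A minimal Go sketch of that apply step, using the same paths shown in the log as illustrative assumptions (this is not minikube's actual cni.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyCNIManifest shells out to the pinned kubectl, equivalent to:
	//   sudo <kubectl> apply --kubeconfig=<kubeconfig> -f <manifest>
	func applyCNIManifest(kubectl, kubeconfig, manifest string) error {
		cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical paths mirroring the layout seen in the log.
		err := applyCNIManifest(
			"/var/lib/minikube/binaries/v1.31.2/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/var/tmp/minikube/cni.yaml",
		)
		fmt.Println("apply:", err)
	}
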
	I1028 11:53:39.369131   95151 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:53:39.369214   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199 minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=true
	I1028 11:53:39.369218   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:39.407348   95151 ops.go:34] apiserver oom_adj: -16
	I1028 11:53:39.512261   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.013130   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:40.512492   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.012760   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:41.512614   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.013105   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:42.513113   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.013197   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:53:43.130930   95151 kubeadm.go:1113] duration metric: took 3.761785969s to wait for elevateKubeSystemPrivileges
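
The repeated "kubectl get sa default" runs above are a poll: the elevateKubeSystemPrivileges step waits until the default ServiceAccount has been created, retrying roughly every 500ms. A hedged stand-in for that wait loop (paths and timeout are assumptions, not the exact minikube implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
	// timeout elapses, mirroring the cadence visible in the log.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // the default ServiceAccount exists
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for the default ServiceAccount")
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.2/kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println("wait:", err)
	}
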
	I1028 11:53:43.130968   95151 kubeadm.go:394] duration metric: took 14.875757661s to StartCluster
	I1028 11:53:43.130992   95151 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.131082   95151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.131868   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:53:43.132066   95151 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.132080   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:53:43.132092   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:53:43.132110   95151 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:53:43.132191   95151 addons.go:69] Setting storage-provisioner=true in profile "ha-273199"
	I1028 11:53:43.132211   95151 addons.go:234] Setting addon storage-provisioner=true in "ha-273199"
	I1028 11:53:43.132226   95151 addons.go:69] Setting default-storageclass=true in profile "ha-273199"
	I1028 11:53:43.132243   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.132254   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.132263   95151 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-273199"
	I1028 11:53:43.132656   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132704   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.132733   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.132778   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.148009   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I1028 11:53:43.148148   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I1028 11:53:43.148527   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.148654   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.149031   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149050   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149159   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.149183   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.149384   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149521   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.149709   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.149923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.149968   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.152241   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:53:43.152594   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:53:43.153153   95151 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:53:43.153487   95151 addons.go:234] Setting addon default-storageclass=true in "ha-273199"
	I1028 11:53:43.153537   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:53:43.153923   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.153966   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.165112   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I1028 11:53:43.165628   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.166122   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.166140   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.166447   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.166644   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.168390   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.168673   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1028 11:53:43.169162   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.169675   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.169697   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.170033   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.170484   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.170504   95151 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:53:43.170529   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.172043   95151 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.172062   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:53:43.172076   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.174879   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175341   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.175404   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.175532   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.175676   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.175782   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.175869   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.188178   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I1028 11:53:43.188778   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.189356   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.189374   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.189736   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.189945   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:53:43.191684   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:53:43.191903   95151 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.191914   95151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:53:43.191927   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:53:43.195100   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195553   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:53:43.195576   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:53:43.195757   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:53:43.195929   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:53:43.196073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:53:43.196212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:53:43.240072   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:53:43.320825   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:53:43.357607   95151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:53:43.543521   95151 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
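
The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1): a "hosts" block is inserted just before the forward plugin in the Corefile. An approximate Go sketch of that edit (string manipulation only; not minikube's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS "hosts" block ahead of the forward
	// plugin so the static host.minikube.internal record is consulted first.
	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.Split(strings.TrimRight(corefile, "\n"), "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(hostsBlock)
			}
			b.WriteString(line)
			b.WriteString("\n")
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
	}
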
	I1028 11:53:43.793100   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793126   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793180   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793204   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793468   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793490   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793520   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793527   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793535   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793541   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793554   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793572   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793581   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.793594   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.793790   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793822   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.793830   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793837   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.793798   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.793900   95151 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:53:43.793919   95151 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:53:43.794073   95151 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:53:43.794085   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.794095   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.794103   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.805561   95151 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:53:43.806144   95151 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:53:43.806158   95151 round_trippers.go:469] Request Headers:
	I1028 11:53:43.806166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:53:43.806169   95151 round_trippers.go:473]     Content-Type: application/json
	I1028 11:53:43.806171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:53:43.809243   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
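
The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon touching the "standard" StorageClass. A hedged client-go sketch of that kind of update; the annotation written here is an assumption about what the PUT body contains, and the kubeconfig path is illustrative only:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		// Fetch the StorageClass created by storageclass.yaml ...
		sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// ... and (assumed) mark it as the cluster default before writing it back.
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
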
	I1028 11:53:43.809609   95151 main.go:141] libmachine: Making call to close driver server
	I1028 11:53:43.809624   95151 main.go:141] libmachine: (ha-273199) Calling .Close
	I1028 11:53:43.809925   95151 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:53:43.809942   95151 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:53:43.809968   95151 main.go:141] libmachine: (ha-273199) DBG | Closing plugin on server side
	I1028 11:53:43.812285   95151 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:53:43.813517   95151 addons.go:510] duration metric: took 681.412756ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:53:43.813552   95151 start.go:246] waiting for cluster config update ...
	I1028 11:53:43.813579   95151 start.go:255] writing updated cluster config ...
	I1028 11:53:43.815032   95151 out.go:201] 
	I1028 11:53:43.816430   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:53:43.816508   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.817974   95151 out.go:177] * Starting "ha-273199-m02" control-plane node in "ha-273199" cluster
	I1028 11:53:43.819185   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:53:43.819208   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:53:43.819300   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:53:43.819313   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:53:43.819381   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:53:43.819558   95151 start.go:360] acquireMachinesLock for ha-273199-m02: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:53:43.819623   95151 start.go:364] duration metric: took 33.288µs to acquireMachinesLock for "ha-273199-m02"
	I1028 11:53:43.819661   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:53:43.819740   95151 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:53:43.821273   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:53:43.821359   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:53:43.821393   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:53:43.836503   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1028 11:53:43.837015   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:53:43.837597   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:53:43.837620   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:53:43.837996   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:53:43.838155   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:53:43.838314   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:53:43.838482   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:53:43.838517   95151 client.go:168] LocalClient.Create starting
	I1028 11:53:43.838554   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:53:43.838592   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838613   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838664   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:53:43.838684   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:53:43.838696   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:53:43.838711   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:53:43.838718   95151 main.go:141] libmachine: (ha-273199-m02) Calling .PreCreateCheck
	I1028 11:53:43.838865   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:53:43.839217   95151 main.go:141] libmachine: Creating machine...
	I1028 11:53:43.839229   95151 main.go:141] libmachine: (ha-273199-m02) Calling .Create
	I1028 11:53:43.839340   95151 main.go:141] libmachine: (ha-273199-m02) Creating KVM machine...
	I1028 11:53:43.840585   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing default KVM network
	I1028 11:53:43.840677   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found existing private KVM network mk-ha-273199
	I1028 11:53:43.840819   95151 main.go:141] libmachine: (ha-273199-m02) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:43.840837   95151 main.go:141] libmachine: (ha-273199-m02) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:53:43.840944   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:43.840827   95521 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:43.841035   95151 main.go:141] libmachine: (ha-273199-m02) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:53:44.101967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.101844   95521 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa...
	I1028 11:53:44.215652   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215521   95521 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk...
	I1028 11:53:44.215686   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing magic tar header
	I1028 11:53:44.215700   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Writing SSH key tar header
	I1028 11:53:44.215717   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:44.215655   95521 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 ...
	I1028 11:53:44.215805   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02
	I1028 11:53:44.215837   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:53:44.215846   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02 (perms=drwx------)
	I1028 11:53:44.215856   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:53:44.215863   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:53:44.215873   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:53:44.215879   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:53:44.215889   95151 main.go:141] libmachine: (ha-273199-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:53:44.215894   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:44.215903   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:53:44.215911   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:53:44.215919   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:53:44.215925   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:53:44.215930   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Checking permissions on dir: /home
	I1028 11:53:44.215935   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Skipping /home - not owner
	I1028 11:53:44.216891   95151 main.go:141] libmachine: (ha-273199-m02) define libvirt domain using xml: 
	I1028 11:53:44.216918   95151 main.go:141] libmachine: (ha-273199-m02) <domain type='kvm'>
	I1028 11:53:44.216933   95151 main.go:141] libmachine: (ha-273199-m02)   <name>ha-273199-m02</name>
	I1028 11:53:44.216941   95151 main.go:141] libmachine: (ha-273199-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:53:44.216950   95151 main.go:141] libmachine: (ha-273199-m02)   <vcpu>2</vcpu>
	I1028 11:53:44.216957   95151 main.go:141] libmachine: (ha-273199-m02)   <features>
	I1028 11:53:44.216966   95151 main.go:141] libmachine: (ha-273199-m02)     <acpi/>
	I1028 11:53:44.216976   95151 main.go:141] libmachine: (ha-273199-m02)     <apic/>
	I1028 11:53:44.216983   95151 main.go:141] libmachine: (ha-273199-m02)     <pae/>
	I1028 11:53:44.216989   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.216999   95151 main.go:141] libmachine: (ha-273199-m02)   </features>
	I1028 11:53:44.217007   95151 main.go:141] libmachine: (ha-273199-m02)   <cpu mode='host-passthrough'>
	I1028 11:53:44.217034   95151 main.go:141] libmachine: (ha-273199-m02)   
	I1028 11:53:44.217056   95151 main.go:141] libmachine: (ha-273199-m02)   </cpu>
	I1028 11:53:44.217068   95151 main.go:141] libmachine: (ha-273199-m02)   <os>
	I1028 11:53:44.217079   95151 main.go:141] libmachine: (ha-273199-m02)     <type>hvm</type>
	I1028 11:53:44.217093   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='cdrom'/>
	I1028 11:53:44.217102   95151 main.go:141] libmachine: (ha-273199-m02)     <boot dev='hd'/>
	I1028 11:53:44.217112   95151 main.go:141] libmachine: (ha-273199-m02)     <bootmenu enable='no'/>
	I1028 11:53:44.217123   95151 main.go:141] libmachine: (ha-273199-m02)   </os>
	I1028 11:53:44.217133   95151 main.go:141] libmachine: (ha-273199-m02)   <devices>
	I1028 11:53:44.217140   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='cdrom'>
	I1028 11:53:44.217157   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/boot2docker.iso'/>
	I1028 11:53:44.217172   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:53:44.217183   95151 main.go:141] libmachine: (ha-273199-m02)       <readonly/>
	I1028 11:53:44.217196   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217208   95151 main.go:141] libmachine: (ha-273199-m02)     <disk type='file' device='disk'>
	I1028 11:53:44.217219   95151 main.go:141] libmachine: (ha-273199-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:53:44.217231   95151 main.go:141] libmachine: (ha-273199-m02)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/ha-273199-m02.rawdisk'/>
	I1028 11:53:44.217243   95151 main.go:141] libmachine: (ha-273199-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:53:44.217254   95151 main.go:141] libmachine: (ha-273199-m02)     </disk>
	I1028 11:53:44.217268   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217279   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='mk-ha-273199'/>
	I1028 11:53:44.217289   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217297   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217306   95151 main.go:141] libmachine: (ha-273199-m02)     <interface type='network'>
	I1028 11:53:44.217311   95151 main.go:141] libmachine: (ha-273199-m02)       <source network='default'/>
	I1028 11:53:44.217318   95151 main.go:141] libmachine: (ha-273199-m02)       <model type='virtio'/>
	I1028 11:53:44.217327   95151 main.go:141] libmachine: (ha-273199-m02)     </interface>
	I1028 11:53:44.217340   95151 main.go:141] libmachine: (ha-273199-m02)     <serial type='pty'>
	I1028 11:53:44.217349   95151 main.go:141] libmachine: (ha-273199-m02)       <target port='0'/>
	I1028 11:53:44.217361   95151 main.go:141] libmachine: (ha-273199-m02)     </serial>
	I1028 11:53:44.217372   95151 main.go:141] libmachine: (ha-273199-m02)     <console type='pty'>
	I1028 11:53:44.217382   95151 main.go:141] libmachine: (ha-273199-m02)       <target type='serial' port='0'/>
	I1028 11:53:44.217390   95151 main.go:141] libmachine: (ha-273199-m02)     </console>
	I1028 11:53:44.217400   95151 main.go:141] libmachine: (ha-273199-m02)     <rng model='virtio'>
	I1028 11:53:44.217420   95151 main.go:141] libmachine: (ha-273199-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:53:44.217438   95151 main.go:141] libmachine: (ha-273199-m02)     </rng>
	I1028 11:53:44.217448   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217460   95151 main.go:141] libmachine: (ha-273199-m02)     
	I1028 11:53:44.217472   95151 main.go:141] libmachine: (ha-273199-m02)   </devices>
	I1028 11:53:44.217481   95151 main.go:141] libmachine: (ha-273199-m02) </domain>
	I1028 11:53:44.217489   95151 main.go:141] libmachine: (ha-273199-m02) 
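
The XML dumped above is the libvirt domain definition for the ha-273199-m02 VM. A sketch of how such a definition can be registered and booted with the libvirt Go bindings (libvirt.org/go/libvirt); this is an assumption-level illustration of the step, with a hypothetical file path, not the kvm2 driver's actual code:

	package main

	import (
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	// defineAndStart persistently defines a domain from XML and boots it.
	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create()
	}

	func main() {
		// Hypothetical path holding a <domain> document like the one above.
		xml, err := os.ReadFile("/tmp/ha-273199-m02.xml")
		if err != nil {
			panic(err)
		}
		if err := defineAndStart(string(xml)); err != nil {
			panic(err)
		}
	}
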
	I1028 11:53:44.223932   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:5f:41:a3 in network default
	I1028 11:53:44.224544   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:44.224583   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring networks are active...
	I1028 11:53:44.225374   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network default is active
	I1028 11:53:44.225816   95151 main.go:141] libmachine: (ha-273199-m02) Ensuring network mk-ha-273199 is active
	I1028 11:53:44.226251   95151 main.go:141] libmachine: (ha-273199-m02) Getting domain xml...
	I1028 11:53:44.227023   95151 main.go:141] libmachine: (ha-273199-m02) Creating domain...
	I1028 11:53:45.439147   95151 main.go:141] libmachine: (ha-273199-m02) Waiting to get IP...
	I1028 11:53:45.440088   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.440554   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.440583   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.440482   95521 retry.go:31] will retry after 269.373557ms: waiting for machine to come up
	I1028 11:53:45.712000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:45.712443   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:45.712474   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:45.712389   95521 retry.go:31] will retry after 298.904949ms: waiting for machine to come up
	I1028 11:53:46.012797   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.013174   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.013203   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.013118   95521 retry.go:31] will retry after 446.110397ms: waiting for machine to come up
	I1028 11:53:46.460766   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.461220   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.461245   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.461168   95521 retry.go:31] will retry after 398.131323ms: waiting for machine to come up
	I1028 11:53:46.860852   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:46.861266   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:46.861297   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:46.861218   95521 retry.go:31] will retry after 575.124652ms: waiting for machine to come up
	I1028 11:53:47.437756   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:47.438185   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:47.438208   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:47.438138   95521 retry.go:31] will retry after 828.228762ms: waiting for machine to come up
	I1028 11:53:48.267451   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:48.267942   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:48.267968   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:48.267911   95521 retry.go:31] will retry after 1.143938031s: waiting for machine to come up
	I1028 11:53:49.414967   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:49.415400   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:49.415424   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:49.415361   95521 retry.go:31] will retry after 1.300605887s: waiting for machine to come up
	I1028 11:53:50.717749   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:50.718139   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:50.718173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:50.718072   95521 retry.go:31] will retry after 1.594414229s: waiting for machine to come up
	I1028 11:53:52.314529   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:52.314977   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:52.315000   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:52.314931   95521 retry.go:31] will retry after 1.837671448s: waiting for machine to come up
	I1028 11:53:54.154075   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:54.154455   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:54.154488   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:54.154386   95521 retry.go:31] will retry after 2.115441874s: waiting for machine to come up
	I1028 11:53:56.272674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:56.273183   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:56.273216   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:56.273084   95521 retry.go:31] will retry after 3.620483706s: waiting for machine to come up
	I1028 11:53:59.894801   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:53:59.895232   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find current IP address of domain ha-273199-m02 in network mk-ha-273199
	I1028 11:53:59.895260   95151 main.go:141] libmachine: (ha-273199-m02) DBG | I1028 11:53:59.895175   95521 retry.go:31] will retry after 3.99432381s: waiting for machine to come up
	I1028 11:54:03.891608   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892071   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has current primary IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.892098   95151 main.go:141] libmachine: (ha-273199-m02) Found IP for machine: 192.168.39.225
	I1028 11:54:03.892108   95151 main.go:141] libmachine: (ha-273199-m02) Reserving static IP address...
	I1028 11:54:03.892498   95151 main.go:141] libmachine: (ha-273199-m02) DBG | unable to find host DHCP lease matching {name: "ha-273199-m02", mac: "52:54:00:ac:c5:96", ip: "192.168.39.225"} in network mk-ha-273199
	I1028 11:54:03.966695   95151 main.go:141] libmachine: (ha-273199-m02) Reserved static IP address: 192.168.39.225
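
The block of "will retry after ..." lines above is a backoff loop: the driver keeps checking the DHCP leases for the new domain's MAC address, sleeping a growing, jittered interval between attempts until an IP appears. A minimal stand-in for that pattern (the growth factor and jitter are assumptions, not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn until it succeeds, sleeping a growing, randomized delay
	// between attempts, and gives up once maxWait has elapsed.
	func retry(fn func() error, maxWait time.Duration) error {
		start := time.Now()
		delay := 250 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxWait {
				return fmt.Errorf("gave up after %s: %w", maxWait, err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the base delay each round
		}
	}

	func main() {
		attempts := 0
		_ = retry(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 2*time.Minute)
	}
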
	I1028 11:54:03.966737   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Getting to WaitForSSH function...
	I1028 11:54:03.966746   95151 main.go:141] libmachine: (ha-273199-m02) Waiting for SSH to be available...
	I1028 11:54:03.969754   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970154   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:03.970188   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:03.970315   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH client type: external
	I1028 11:54:03.970338   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa (-rw-------)
	I1028 11:54:03.970367   95151 main.go:141] libmachine: (ha-273199-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:54:03.970390   95151 main.go:141] libmachine: (ha-273199-m02) DBG | About to run SSH command:
	I1028 11:54:03.970403   95151 main.go:141] libmachine: (ha-273199-m02) DBG | exit 0
	I1028 11:54:04.099273   95151 main.go:141] libmachine: (ha-273199-m02) DBG | SSH cmd err, output: <nil>: 
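
WaitForSSH above simply runs "exit 0" over SSH with the machine's generated key until sshd answers. A hedged Go sketch of one such probe using golang.org/x/crypto/ssh (address, user, and key path are copied from the log for illustration; the external-ssh variant in the log shells out to /usr/bin/ssh instead):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeSSH connects with the machine's private key and runs "exit 0" to
	// confirm the SSH daemon is reachable.
	func probeSSH(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0")
	}

	func main() {
		err := probeSSH("192.168.39.225:22", "docker",
			"/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa")
		fmt.Println("ssh probe:", err)
	}
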
	I1028 11:54:04.099507   95151 main.go:141] libmachine: (ha-273199-m02) KVM machine creation complete!
	I1028 11:54:04.099831   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:04.100498   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100706   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:04.100853   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:54:04.100870   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetState
	I1028 11:54:04.101944   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:54:04.101958   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:54:04.101966   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:54:04.101973   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.104164   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104483   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.104506   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.104767   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.104942   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105094   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.105250   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.105441   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.105654   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.105665   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:54:04.218542   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.218568   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:54:04.218578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.221233   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221723   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.221745   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.221945   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.222117   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222361   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.222486   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.222628   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.222833   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.222844   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:54:04.335872   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:54:04.335945   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:54:04.335957   95151 main.go:141] libmachine: Provisioning with buildroot...
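
The provisioner is detected by reading /etc/os-release over SSH (output shown above) and matching on its ID field, which reports "buildroot" for the minikube guest image. A small local sketch of that parse (assumed logic, not the exact detector):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// osReleaseID returns the ID= value from an os-release style file.
	func osReleaseID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		return "", sc.Err()
	}

	func main() {
		id, err := osReleaseID("/etc/os-release")
		fmt.Println(id, err) // prints "buildroot <nil>" on the guest shown above
	}
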
	I1028 11:54:04.335971   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336202   95151 buildroot.go:166] provisioning hostname "ha-273199-m02"
	I1028 11:54:04.336228   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.336396   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.338798   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339173   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.339199   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.339341   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.339521   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339681   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.339813   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.339995   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.340196   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.340212   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m02 && echo "ha-273199-m02" | sudo tee /etc/hostname
	I1028 11:54:04.470703   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m02
	
	I1028 11:54:04.470739   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.473349   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473761   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.473785   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.473981   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.474167   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474373   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.474538   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.474717   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.474941   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.474960   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:54:04.595447   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:54:04.595480   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:54:04.595502   95151 buildroot.go:174] setting up certificates
	I1028 11:54:04.595513   95151 provision.go:84] configureAuth start
	I1028 11:54:04.595525   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetMachineName
	I1028 11:54:04.595847   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:04.598618   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599070   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.599093   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.599227   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.601800   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602155   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.602179   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.602325   95151 provision.go:143] copyHostCerts
	I1028 11:54:04.602362   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602399   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:54:04.602409   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:54:04.602488   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:54:04.602621   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602649   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:54:04.602654   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:54:04.602686   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:54:04.602741   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602762   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:54:04.602770   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:54:04.602806   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:54:04.602864   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m02 san=[127.0.0.1 192.168.39.225 ha-273199-m02 localhost minikube]
	I1028 11:54:04.712606   95151 provision.go:177] copyRemoteCerts
	I1028 11:54:04.712663   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:54:04.712689   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.715518   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.715885   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.715912   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.716119   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.716297   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.716427   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.716589   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:04.800760   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:54:04.800829   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:54:04.821891   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:54:04.821965   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:54:04.847580   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:54:04.847678   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:54:04.870711   95151 provision.go:87] duration metric: took 275.184548ms to configureAuth
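
configureAuth signs a per-machine server certificate against the shared CA using the SAN list in the provision.go line above (127.0.0.1, 192.168.39.225, ha-273199-m02, localhost, minikube). A rough openssl equivalent, assuming ca.pem/ca-key.pem from the same .minikube/certs directory; the key size and validity below are illustrative, not minikube's exact defaults:

  openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-273199-m02" \
    -keyout server-key.pem -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -days 365 -out server.pem \
    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.225,DNS:ha-273199-m02,DNS:localhost,DNS:minikube")
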
	I1028 11:54:04.870736   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:54:04.870943   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:04.871041   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:04.873592   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.873927   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:04.873960   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:04.874110   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:04.874287   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874448   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:04.874594   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:04.874763   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:04.874973   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:04.874993   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:54:05.089509   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:54:05.089537   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:54:05.089548   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetURL
	I1028 11:54:05.090747   95151 main.go:141] libmachine: (ha-273199-m02) DBG | Using libvirt version 6000000
	I1028 11:54:05.092647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.092983   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.093012   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.093142   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:54:05.093158   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:54:05.093166   95151 client.go:171] duration metric: took 21.254637002s to LocalClient.Create
	I1028 11:54:05.093189   95151 start.go:167] duration metric: took 21.254710604s to libmachine.API.Create "ha-273199"
	I1028 11:54:05.093198   95151 start.go:293] postStartSetup for "ha-273199-m02" (driver="kvm2")
	I1028 11:54:05.093210   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:54:05.093234   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.093471   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:54:05.093501   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.095736   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096090   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.096118   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.096277   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.096451   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.096607   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.096752   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.185260   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:54:05.189209   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:54:05.189235   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:54:05.189307   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:54:05.189410   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:54:05.189427   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:54:05.189540   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:54:05.197852   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:05.218582   95151 start.go:296] duration metric: took 125.373729ms for postStartSetup
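
postStartSetup also mirrors anything under the host's .minikube/files directory into the guest at the same path, which is how 849652.pem ends up in /etc/ssl/certs. The same mechanism can be used by hand (the CA file name below is hypothetical):

  mkdir -p ~/.minikube/files/etc/ssl/certs
  cp corp-root-ca.pem ~/.minikube/files/etc/ssl/certs/   # hypothetical extra CA
  # on the next start, minikube copies it to /etc/ssl/certs/ inside the node
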
	I1028 11:54:05.218639   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetConfigRaw
	I1028 11:54:05.219202   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.221996   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222347   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.222371   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.222675   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:05.222856   95151 start.go:128] duration metric: took 21.403106118s to createHost
	I1028 11:54:05.222880   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.225160   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225457   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.225486   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.225646   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.225805   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.225943   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.226048   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.226180   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:54:05.226400   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1028 11:54:05.226415   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:54:05.335802   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116445.296198293
	
	I1028 11:54:05.335827   95151 fix.go:216] guest clock: 1730116445.296198293
	I1028 11:54:05.335841   95151 fix.go:229] Guest: 2024-10-28 11:54:05.296198293 +0000 UTC Remote: 2024-10-28 11:54:05.222866703 +0000 UTC m=+67.355138355 (delta=73.33159ms)
	I1028 11:54:05.335873   95151 fix.go:200] guest clock delta is within tolerance: 73.33159ms
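
The guest-clock check simply compares date +%s.%N inside the VM with the host clock and accepts the small delta seen here (about 73ms). A rough stand-alone equivalent, using the SSH user and key path shown in the sshutil lines of this run:

  host=$(date +%s.%N)
  guest=$(ssh -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa \
          docker@192.168.39.225 'date +%s.%N')
  awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %+.3fs\n", h - g }'
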
	I1028 11:54:05.335881   95151 start.go:83] releasing machines lock for "ha-273199-m02", held for 21.516234573s
	I1028 11:54:05.335906   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.336186   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:05.338574   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.338916   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.338947   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.341021   95151 out.go:177] * Found network options:
	I1028 11:54:05.342553   95151 out.go:177]   - NO_PROXY=192.168.39.208
	W1028 11:54:05.343876   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.343912   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344410   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344601   95151 main.go:141] libmachine: (ha-273199-m02) Calling .DriverName
	I1028 11:54:05.344686   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:54:05.344725   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	W1028 11:54:05.344795   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:54:05.344870   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:54:05.344892   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHHostname
	I1028 11:54:05.347272   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347603   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347647   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.347674   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.347762   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.347920   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348040   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:05.348054   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348067   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:05.348192   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.348264   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHPort
	I1028 11:54:05.348426   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHKeyPath
	I1028 11:54:05.348578   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetSSHUsername
	I1028 11:54:05.348717   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m02/id_rsa Username:docker}
	I1028 11:54:05.584423   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:54:05.589736   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:54:05.589802   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:54:05.603598   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:54:05.603618   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:54:05.603689   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:54:05.618579   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:54:05.631876   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:54:05.631943   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:54:05.646115   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:54:05.659547   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:54:05.777548   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:54:05.920510   95151 docker.go:233] disabling docker service ...
	I1028 11:54:05.920601   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:54:05.935682   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:54:05.948830   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:54:06.089969   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:54:06.214668   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:54:06.227025   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:54:06.243529   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:54:06.243600   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.252888   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:54:06.252945   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.262219   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.271415   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.282109   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:54:06.291692   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.300914   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.316681   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:54:06.325900   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:54:06.334164   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:54:06.334217   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:54:06.345295   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:54:06.353414   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:06.469387   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
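
The block above writes a one-line /etc/crictl.yaml pointing crictl at the CRI-O socket and then patches /etc/crio/crio.conf.d/02-crio.conf with sed before restarting CRI-O. A quick way to check the result on the node; the expected values are reconstructed from the commands above, not dumped from the real file:

  cat /etc/crictl.yaml
  # runtime-endpoint: unix:///var/run/crio/crio.sock

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",
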
	I1028 11:54:06.564464   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:54:06.564532   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:54:06.570888   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:54:06.570947   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:54:06.574424   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:54:06.609470   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:54:06.609577   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.636484   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:54:06.662978   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:54:06.664616   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:54:06.665640   95151 main.go:141] libmachine: (ha-273199-m02) Calling .GetIP
	I1028 11:54:06.668607   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.668966   95151 main.go:141] libmachine: (ha-273199-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c5:96", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:57 +0000 UTC Type:0 Mac:52:54:00:ac:c5:96 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-273199-m02 Clientid:01:52:54:00:ac:c5:96}
	I1028 11:54:06.669004   95151 main.go:141] libmachine: (ha-273199-m02) DBG | domain ha-273199-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:ac:c5:96 in network mk-ha-273199
	I1028 11:54:06.669229   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:54:06.673421   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:54:06.684696   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:54:06.684909   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:06.685156   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.685193   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.700107   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38707
	I1028 11:54:06.700577   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.701057   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.701079   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.701393   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.701590   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:54:06.703274   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:06.703621   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:06.703693   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:06.718078   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I1028 11:54:06.718513   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:06.718987   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:06.719005   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:06.719322   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:06.719504   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:06.719671   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.225
	I1028 11:54:06.719683   95151 certs.go:194] generating shared ca certs ...
	I1028 11:54:06.719702   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.719827   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:54:06.719882   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:54:06.719896   95151 certs.go:256] generating profile certs ...
	I1028 11:54:06.720023   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:54:06.720055   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909
	I1028 11:54:06.720075   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.254]
	I1028 11:54:06.852806   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 ...
	I1028 11:54:06.852843   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909: {Name:mkb8ff493606403d4b0e4c7b0477c06720a08f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853016   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 ...
	I1028 11:54:06.853029   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909: {Name:mkb3a86efc0165669c50f21e172de132f2ce3594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:54:06.853101   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:54:06.853233   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.eab99909 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:54:06.853356   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:54:06.853375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:54:06.853388   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:54:06.853400   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:54:06.853413   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:54:06.853426   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:54:06.853437   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:54:06.853448   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:54:06.853457   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
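
The apiserver certificate generated a few lines up has to cover both control-plane IPs plus the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.208, 192.168.39.225, 192.168.39.254). One way to confirm the SANs on the copied cert, using the profile path from this run:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt \
    | grep -A1 'Subject Alternative Name'
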
	I1028 11:54:06.853505   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:54:06.853533   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:54:06.853542   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:54:06.853570   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:54:06.853618   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:54:06.853648   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:54:06.853686   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:54:06.853716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:06.853730   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:54:06.853740   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:54:06.853773   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:06.856848   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857257   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:06.857283   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:06.857465   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:06.857654   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:06.857769   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:06.857872   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:06.935983   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:54:06.940830   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:54:06.951512   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:54:06.955415   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:54:06.964440   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:54:06.967840   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:54:06.977901   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:54:06.982116   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:54:06.992655   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:54:06.997042   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:54:07.006289   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:54:07.009936   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:54:07.019550   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:54:07.043269   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:54:07.066117   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:54:07.088035   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:54:07.109468   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:54:07.130767   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:54:07.153514   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:54:07.175748   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:54:07.198209   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:54:07.219569   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:54:07.241366   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:54:07.262724   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:54:07.277348   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:54:07.291720   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:54:07.305550   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:54:07.319528   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:54:07.333567   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:54:07.347382   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:54:07.361182   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:54:07.366165   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:54:07.375271   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379042   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.379097   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:54:07.384098   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:54:07.393089   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:54:07.402170   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405931   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.405973   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:54:07.410926   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:54:07.420134   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:54:07.429223   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433088   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.433140   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:54:07.437953   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
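
The openssl x509 -hash calls explain the short link names: each PEM under /usr/share/ca-certificates gets a symlink named after its subject hash in /etc/ssl/certs (b5213941.0, 3ec20f2e.0 and 51391683.0 above). Recreating one link by hand, as a sketch:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
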
	I1028 11:54:07.447048   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:54:07.450389   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:54:07.450445   95151 kubeadm.go:934] updating node {m02 192.168.39.225 8443 v1.31.2 crio true true} ...
	I1028 11:54:07.450537   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:54:07.450564   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:54:07.450597   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:54:07.463741   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:54:07.463803   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
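
The static-pod manifest above runs kube-vip with control-plane load-balancing enabled (cp_enable/lb_enable), so 192.168.39.254:8443 can float across the control-plane nodes. Once the pod is running, a hedged smoke test from any host on the 192.168.39.0/24 network:

  ping -c1 192.168.39.254
  # /healthz is normally reachable without credentials, so this should print "ok"
  curl -sk https://192.168.39.254:8443/healthz
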
	I1028 11:54:07.463849   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.472253   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:54:07.472293   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:54:07.480970   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:54:07.480983   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:54:07.481001   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.481025   95151 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:54:07.481066   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:54:07.484605   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:54:07.484635   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:54:08.215699   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.215797   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:54:08.220472   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:54:08.220510   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:54:08.302949   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:08.332777   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.332899   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:54:08.344780   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:54:08.344827   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
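
kubectl, kubeadm and kubelet are downloaded from dl.k8s.io and verified against the published .sha256 files (the checksum= part of the URLs above). The same check done by hand for one of the binaries:

  curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm
  curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
  echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
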
	I1028 11:54:08.738465   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:54:08.748651   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 11:54:08.763967   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:54:08.778166   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:54:08.792673   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:54:08.796110   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:54:08.806415   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:08.913077   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:08.928428   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:54:08.928936   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:08.929001   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:08.945393   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1028 11:54:08.945922   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:08.946367   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:08.946393   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:08.946734   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:08.946931   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:54:08.947168   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:54:08.947340   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:54:08.947363   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:54:08.950295   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.950729   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:54:08.950759   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:54:08.951003   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:54:08.951292   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:54:08.951467   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:54:08.951675   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:54:09.101707   95151 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:09.101780   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I1028 11:54:28.747369   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 73w2vd.c8iekbscs17hpxyn --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (19.645557844s)
	I1028 11:54:28.747419   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:54:29.256098   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m02 minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:54:29.382642   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:54:29.487190   95151 start.go:319] duration metric: took 20.540107471s to joinCluster
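
After the join, the new member is labeled with minikube's version metadata and the control-plane NoSchedule taint is removed (the trailing "-" in "node-role.kubernetes.io/control-plane:NoSchedule-" deletes the taint) so the node can also run regular workloads. A rough client-go equivalent of the label step, using a strategic merge patch (kubeconfig path and label value are placeholders):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent to: kubectl label --overwrite nodes <node> minikube.k8s.io/primary=false
        patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
        if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-273199-m02",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            log.Fatal(err)
        }
    }
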
	I1028 11:54:29.487270   95151 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:29.487603   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:29.489950   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:54:29.491267   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:54:29.728819   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:54:29.746970   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:54:29.747328   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:54:29.747474   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:54:29.747814   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m02" to be "Ready" ...
	I1028 11:54:29.747948   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:29.747961   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:29.747972   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:29.747980   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:29.757406   95151 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:54:30.248317   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.248345   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.248355   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.248359   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.255105   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:54:30.748943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:30.748969   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:30.748978   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:30.748984   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:30.752101   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:31.248899   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.248919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.248928   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.248936   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.251583   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.748337   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:31.748357   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:31.748366   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:31.748371   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:31.751333   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:31.751989   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:32.248221   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.248243   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.248251   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.248255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.259191   95151 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1028 11:54:32.748148   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:32.748179   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:32.748189   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:32.748194   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:32.751101   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.249110   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.249135   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.249144   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.249150   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.251769   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:33.748905   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:33.748928   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:33.748937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:33.748942   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:33.751961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:33.752497   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:34.248826   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.248847   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.248857   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.248863   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.251279   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:34.748949   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:34.748976   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:34.748988   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:34.748993   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:34.752114   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:35.248874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.248898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.248906   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.248911   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.251839   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:35.748886   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:35.748919   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:35.748932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:35.748940   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:35.751814   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.248781   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.248808   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.248821   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.248826   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.251662   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:36.252253   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:36.748294   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:36.748319   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:36.748329   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:36.748343   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:36.751795   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.248778   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.248807   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.248815   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.248820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.252064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:37.748876   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:37.748901   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:37.748910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:37.748922   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:37.752889   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.248910   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.248935   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.248946   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.248951   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.252324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:38.252974   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:38.748358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:38.748389   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:38.748401   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:38.748410   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:38.751564   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.248494   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.248515   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.248524   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.248530   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.251902   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:39.748889   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:39.748912   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:39.748920   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:39.748925   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:39.751666   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.248637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.248663   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.248675   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.248682   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.251500   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.748631   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:40.748655   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:40.748665   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:40.748671   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:40.751537   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:40.752161   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:41.248409   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.248429   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.248437   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.248441   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.251178   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:41.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:41.748632   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:41.748641   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:41.748645   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:41.751235   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.248135   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.248157   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.248166   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.248171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.251061   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.748875   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:42.748898   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:42.748904   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:42.748908   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:42.751883   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:42.752428   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:43.248728   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.248749   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.248757   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.248760   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.251847   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:43.748532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:43.748554   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:43.748562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:43.748565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:43.751916   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.248210   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.248233   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.248241   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.248245   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.251111   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:44.749062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:44.749085   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:44.749092   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:44.749096   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:44.752695   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:44.753451   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:45.248752   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.248776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.248784   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.248787   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.251702   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:45.748613   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:45.748635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:45.748643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:45.748647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:45.751481   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:46.248237   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.248261   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.248269   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.248272   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.251677   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:46.748175   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:46.748196   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:46.748204   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:46.748209   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:46.750924   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.249094   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.249121   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.249133   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.249139   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.251939   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:47.252527   95151 node_ready.go:53] node "ha-273199-m02" has status "Ready":"False"
	I1028 11:54:47.748867   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:47.748890   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:47.748899   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:47.748903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:47.751778   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.248555   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.248585   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.248593   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.248597   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.251510   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.252376   95151 node_ready.go:49] node "ha-273199-m02" has status "Ready":"True"
	I1028 11:54:48.252397   95151 node_ready.go:38] duration metric: took 18.504559305s for node "ha-273199-m02" to be "Ready" ...
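
The loop above is a plain poll: GET /api/v1/nodes/ha-273199-m02 roughly every 500 ms until the node's Ready condition turns True, which takes about 18.5 s here while the kubelet and CNI finish coming up. A minimal client-go sketch of the same wait (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Poll every 500ms, for up to 6 minutes, until the Ready condition is True.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-273199-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "keep polling"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("node is Ready")
    }
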
	I1028 11:54:48.252406   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:54:48.252478   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:48.252487   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.252496   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.252506   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.256049   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:48.261653   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.261730   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:54:48.261739   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.261746   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.261749   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.264166   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.264759   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.264776   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.264785   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.264790   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.266666   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.267238   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.267257   95151 pod_ready.go:82] duration metric: took 5.581341ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267267   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.267326   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:54:48.267336   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.267346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.267353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.269749   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.270236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.270252   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.270259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.270262   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.272089   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.272472   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.272487   95151 pod_ready.go:82] duration metric: took 5.21491ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272495   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.272536   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:54:48.272543   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.272550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.272553   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.274596   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.275004   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.275018   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.275024   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.275028   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.277124   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.277710   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.277730   95151 pod_ready.go:82] duration metric: took 5.229334ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277742   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.277804   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:54:48.277816   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.277826   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.277830   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282085   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:48.282776   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:48.282794   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.282804   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.282810   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.284715   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:54:48.285139   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.285156   95151 pod_ready.go:82] duration metric: took 7.407951ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.285172   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.449552   95151 request.go:632] Waited for 164.30368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449637   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:54:48.449649   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.449658   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.449662   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.452644   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.649614   95151 request.go:632] Waited for 196.347979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649674   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:48.649678   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.649686   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.649691   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.652639   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:48.653086   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:48.653104   95151 pod_ready.go:82] duration metric: took 367.924183ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
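
The "Waited for ... due to client-side throttling" entries come from client-go's client-side rate limiter, which defaults to 5 QPS with a burst of 10 when QPS and Burst are left unset; the rapid alternation of pod and node GETs exhausts that burst, so requests are delayed locally before they ever reach the apiserver. A sketch of raising those limits on a rest.Config (the values are illustrative, not what minikube uses):

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        // With QPS/Burst left at zero, client-go falls back to 5 QPS / burst 10,
        // which is what produces the request.go:632 throttling messages above.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }
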
	I1028 11:54:48.653115   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:48.849567   95151 request.go:632] Waited for 196.382043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849633   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:54:48.849638   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:48.849645   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:48.849650   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:48.853050   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.049149   95151 request.go:632] Waited for 195.394568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049239   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.049247   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.049258   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.049265   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.052619   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.053476   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.053498   95151 pod_ready.go:82] duration metric: took 400.377088ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.053510   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.249514   95151 request.go:632] Waited for 195.91409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:54:49.249580   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.249588   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.249592   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.252347   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.449321   95151 request.go:632] Waited for 196.389294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449390   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:49.449397   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.449406   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.449409   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.451910   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.452527   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.452552   95151 pod_ready.go:82] duration metric: took 399.03422ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.452565   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.649568   95151 request.go:632] Waited for 196.917152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:54:49.649635   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.649643   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.649647   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.652785   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:49.848836   95151 request.go:632] Waited for 195.315288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848913   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:49.848921   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:49.848932   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:49.848937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:49.851674   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:49.852191   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:49.852210   95151 pod_ready.go:82] duration metric: took 399.639073ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:49.852221   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.049350   95151 request.go:632] Waited for 197.035616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049425   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:54:50.049433   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.049443   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.049452   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.052771   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.248743   95151 request.go:632] Waited for 195.280445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248807   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:50.248812   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.248827   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.248832   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.251804   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:54:50.252387   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.252412   95151 pod_ready.go:82] duration metric: took 400.185555ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.252424   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.449549   95151 request.go:632] Waited for 197.016421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449623   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:54:50.449628   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.449639   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.449643   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.453027   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.649191   95151 request.go:632] Waited for 195.415709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649276   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:50.649281   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.649289   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.649293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.652536   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:50.653266   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:50.653285   95151 pod_ready.go:82] duration metric: took 400.855966ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.653296   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:50.849376   95151 request.go:632] Waited for 196.004526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849458   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:54:50.849463   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:50.849471   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:50.849475   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:50.852508   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.049649   95151 request.go:632] Waited for 196.358583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049709   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:54:51.049715   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.049722   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.049726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.053157   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.053815   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.053835   95151 pod_ready.go:82] duration metric: took 400.533283ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.053846   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.248991   95151 request.go:632] Waited for 195.052058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249059   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:54:51.249064   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.249072   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.249078   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.252735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.448724   95151 request.go:632] Waited for 195.285595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448790   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:54:51.448806   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.448820   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.448825   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.452721   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.453238   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:54:51.453263   95151 pod_ready.go:82] duration metric: took 399.409754ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:54:51.453278   95151 pod_ready.go:39] duration metric: took 3.200858022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
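
pod_ready applies the same pattern per system-critical pod: fetch the pod, check its Ready condition, and confirm the node it runs on, for every selector in the list above. A compact sketch that lists kube-system pods by one of those selectors and reports readiness (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            log.Fatal(err)
        }
        for i := range pods.Items {
            fmt.Printf("%s ready=%v\n", pods.Items[i].Name, isReady(&pods.Items[i]))
        }
    }
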
	I1028 11:54:51.453306   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:54:51.453378   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:54:51.468618   95151 api_server.go:72] duration metric: took 21.98130215s to wait for apiserver process to appear ...
	I1028 11:54:51.468648   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:54:51.468673   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:54:51.472937   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:54:51.473008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:54:51.473014   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.473022   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.473030   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.473790   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:54:51.473893   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:54:51.473910   95151 api_server.go:131] duration metric: took 5.255617ms to wait for apiserver health ...
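
The health probe is a GET against the apiserver's /healthz endpoint, whose body is the literal string "ok" when healthy, followed by /version to read the control-plane version. Both can go through the same authenticated client, for example via the discovery REST client:

    package main

    import (
        "context"
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // GET /healthz with the client's TLS credentials; the body is "ok" when healthy.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version, the same call that reports "control plane version: v1.31.2" above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("server version:", v.GitVersion)
    }
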
	I1028 11:54:51.473917   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:54:51.649350   95151 request.go:632] Waited for 175.3296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649418   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:51.649424   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.649431   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.649436   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.653819   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:54:51.658610   95151 system_pods.go:59] 17 kube-system pods found
	I1028 11:54:51.658641   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:51.658646   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:51.658651   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:51.658654   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:51.658657   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:51.658660   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:51.658664   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:51.658669   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:51.658674   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:51.658682   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:51.658691   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:51.658696   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:51.658700   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:51.658704   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:51.658707   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:51.658710   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:51.658715   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:51.658722   95151 system_pods.go:74] duration metric: took 184.79709ms to wait for pod list to return data ...
	I1028 11:54:51.658732   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:54:51.849471   95151 request.go:632] Waited for 190.648261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849532   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:54:51.849537   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:51.849546   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:51.849549   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:51.853472   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:51.853716   95151 default_sa.go:45] found service account: "default"
	I1028 11:54:51.853732   95151 default_sa.go:55] duration metric: took 194.991571ms for default service account to be created ...
	I1028 11:54:51.853742   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:54:52.049206   95151 request.go:632] Waited for 195.38768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049272   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:54:52.049279   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.049287   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.049293   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.055256   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:54:52.060109   95151 system_pods.go:86] 17 kube-system pods found
	I1028 11:54:52.060133   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:54:52.060139   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:54:52.060143   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:54:52.060147   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:54:52.060151   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:54:52.060154   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:54:52.060158   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:54:52.060162   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:54:52.060166   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:54:52.060171   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:54:52.060175   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:54:52.060178   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:54:52.060182   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:54:52.060185   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:54:52.060188   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:54:52.060192   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:54:52.060196   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:54:52.060203   95151 system_pods.go:126] duration metric: took 206.45399ms to wait for k8s-apps to be running ...
	I1028 11:54:52.060213   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:54:52.060255   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:54:52.076447   95151 system_svc.go:56] duration metric: took 16.226067ms WaitForService to wait for kubelet
	I1028 11:54:52.076476   95151 kubeadm.go:582] duration metric: took 22.589167548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:54:52.076506   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:54:52.248935   95151 request.go:632] Waited for 172.334931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.248998   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:54:52.249004   95151 round_trippers.go:469] Request Headers:
	I1028 11:54:52.249011   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:54:52.249015   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:54:52.252625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:54:52.253475   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253500   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253515   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:54:52.253518   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:54:52.253523   95151 node_conditions.go:105] duration metric: took 177.008634ms to run NodePressure ...
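
The NodePressure step reads each node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage per node here) and verifies that the pressure conditions are clear. A sketch that lists the nodes and prints the same fields (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            // Report memory or disk pressure, the conditions this check guards against.
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                    c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure: %s\n", c.Type)
                }
            }
        }
    }
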
	I1028 11:54:52.253537   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:54:52.253563   95151 start.go:255] writing updated cluster config ...
	I1028 11:54:52.255885   95151 out.go:201] 
	I1028 11:54:52.257299   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:54:52.257397   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.258847   95151 out.go:177] * Starting "ha-273199-m03" control-plane node in "ha-273199" cluster
	I1028 11:54:52.259962   95151 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:54:52.259986   95151 cache.go:56] Caching tarball of preloaded images
	I1028 11:54:52.260095   95151 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:54:52.260118   95151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:54:52.260241   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:54:52.260461   95151 start.go:360] acquireMachinesLock for ha-273199-m03: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:54:52.260509   95151 start.go:364] duration metric: took 28.17µs to acquireMachinesLock for "ha-273199-m03"
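
Provisioning a new machine starts by taking a process-wide machines lock so concurrent invocations cannot create or mutate the same VM profile; with nothing contending, acquisition is effectively instant (28.17µs here). A rough illustration of the idea with an advisory file lock (the lock path is made up, and minikube's real implementation uses a different locking library):

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Open (or create) a lock file shared by all cooperating processes.
        f, err := os.OpenFile("/tmp/minikube-machines.lock", os.O_CREATE|os.O_RDWR, 0o644)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Block until we hold an exclusive advisory lock, then provision the machine.
        if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
            log.Fatal(err)
        }
        defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

        log.Println("machines lock held; safe to create the VM")
    }
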
	I1028 11:54:52.260527   95151 start.go:93] Provisioning new machine with config: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:54:52.260626   95151 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:54:52.262400   95151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:54:52.262503   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:54:52.262543   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:54:52.277859   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I1028 11:54:52.278262   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:54:52.278738   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:54:52.278759   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:54:52.279160   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:54:52.279351   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:54:52.279503   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:54:52.279669   95151 start.go:159] libmachine.API.Create for "ha-273199" (driver="kvm2")
	I1028 11:54:52.279701   95151 client.go:168] LocalClient.Create starting
	I1028 11:54:52.279735   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 11:54:52.279771   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279787   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279863   95151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 11:54:52.279888   95151 main.go:141] libmachine: Decoding PEM data...
	I1028 11:54:52.279905   95151 main.go:141] libmachine: Parsing certificate...
	I1028 11:54:52.279929   95151 main.go:141] libmachine: Running pre-create checks...
	I1028 11:54:52.279940   95151 main.go:141] libmachine: (ha-273199-m03) Calling .PreCreateCheck
	I1028 11:54:52.280085   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:54:52.280426   95151 main.go:141] libmachine: Creating machine...
	I1028 11:54:52.280439   95151 main.go:141] libmachine: (ha-273199-m03) Calling .Create
	I1028 11:54:52.280557   95151 main.go:141] libmachine: (ha-273199-m03) Creating KVM machine...
	I1028 11:54:52.281865   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing default KVM network
	I1028 11:54:52.281971   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found existing private KVM network mk-ha-273199
	I1028 11:54:52.282111   95151 main.go:141] libmachine: (ha-273199-m03) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.282133   95151 main.go:141] libmachine: (ha-273199-m03) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:54:52.282187   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.282077   95896 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.282257   95151 main.go:141] libmachine: (ha-273199-m03) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:54:52.559668   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.559518   95896 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa...
	I1028 11:54:52.735541   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.735336   95896 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk...
	I1028 11:54:52.735589   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing magic tar header
	I1028 11:54:52.735964   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Writing SSH key tar header
	I1028 11:54:52.736074   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:52.736016   95896 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 ...
	I1028 11:54:52.736145   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03
	I1028 11:54:52.736240   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03 (perms=drwx------)
	I1028 11:54:52.736277   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 11:54:52.736290   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:54:52.736342   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 11:54:52.736362   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 11:54:52.736375   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:54:52.736394   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 11:54:52.736406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:54:52.736415   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:54:52.736428   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:54:52.736436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Checking permissions on dir: /home
	I1028 11:54:52.736447   95151 main.go:141] libmachine: (ha-273199-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:54:52.736462   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:52.736473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Skipping /home - not owner
	I1028 11:54:52.737378   95151 main.go:141] libmachine: (ha-273199-m03) define libvirt domain using xml: 
	I1028 11:54:52.737401   95151 main.go:141] libmachine: (ha-273199-m03) <domain type='kvm'>
	I1028 11:54:52.737412   95151 main.go:141] libmachine: (ha-273199-m03)   <name>ha-273199-m03</name>
	I1028 11:54:52.737420   95151 main.go:141] libmachine: (ha-273199-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:54:52.737428   95151 main.go:141] libmachine: (ha-273199-m03)   <vcpu>2</vcpu>
	I1028 11:54:52.737434   95151 main.go:141] libmachine: (ha-273199-m03)   <features>
	I1028 11:54:52.737442   95151 main.go:141] libmachine: (ha-273199-m03)     <acpi/>
	I1028 11:54:52.737451   95151 main.go:141] libmachine: (ha-273199-m03)     <apic/>
	I1028 11:54:52.737465   95151 main.go:141] libmachine: (ha-273199-m03)     <pae/>
	I1028 11:54:52.737475   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737485   95151 main.go:141] libmachine: (ha-273199-m03)   </features>
	I1028 11:54:52.737498   95151 main.go:141] libmachine: (ha-273199-m03)   <cpu mode='host-passthrough'>
	I1028 11:54:52.737507   95151 main.go:141] libmachine: (ha-273199-m03)   
	I1028 11:54:52.737512   95151 main.go:141] libmachine: (ha-273199-m03)   </cpu>
	I1028 11:54:52.737516   95151 main.go:141] libmachine: (ha-273199-m03)   <os>
	I1028 11:54:52.737521   95151 main.go:141] libmachine: (ha-273199-m03)     <type>hvm</type>
	I1028 11:54:52.737530   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='cdrom'/>
	I1028 11:54:52.737537   95151 main.go:141] libmachine: (ha-273199-m03)     <boot dev='hd'/>
	I1028 11:54:52.737549   95151 main.go:141] libmachine: (ha-273199-m03)     <bootmenu enable='no'/>
	I1028 11:54:52.737555   95151 main.go:141] libmachine: (ha-273199-m03)   </os>
	I1028 11:54:52.737566   95151 main.go:141] libmachine: (ha-273199-m03)   <devices>
	I1028 11:54:52.737573   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='cdrom'>
	I1028 11:54:52.737605   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/boot2docker.iso'/>
	I1028 11:54:52.737626   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:54:52.737633   95151 main.go:141] libmachine: (ha-273199-m03)       <readonly/>
	I1028 11:54:52.737643   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737649   95151 main.go:141] libmachine: (ha-273199-m03)     <disk type='file' device='disk'>
	I1028 11:54:52.737657   95151 main.go:141] libmachine: (ha-273199-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:54:52.737664   95151 main.go:141] libmachine: (ha-273199-m03)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/ha-273199-m03.rawdisk'/>
	I1028 11:54:52.737674   95151 main.go:141] libmachine: (ha-273199-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:54:52.737679   95151 main.go:141] libmachine: (ha-273199-m03)     </disk>
	I1028 11:54:52.737686   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737691   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='mk-ha-273199'/>
	I1028 11:54:52.737697   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737702   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737709   95151 main.go:141] libmachine: (ha-273199-m03)     <interface type='network'>
	I1028 11:54:52.737714   95151 main.go:141] libmachine: (ha-273199-m03)       <source network='default'/>
	I1028 11:54:52.737721   95151 main.go:141] libmachine: (ha-273199-m03)       <model type='virtio'/>
	I1028 11:54:52.737725   95151 main.go:141] libmachine: (ha-273199-m03)     </interface>
	I1028 11:54:52.737736   95151 main.go:141] libmachine: (ha-273199-m03)     <serial type='pty'>
	I1028 11:54:52.737741   95151 main.go:141] libmachine: (ha-273199-m03)       <target port='0'/>
	I1028 11:54:52.737750   95151 main.go:141] libmachine: (ha-273199-m03)     </serial>
	I1028 11:54:52.737755   95151 main.go:141] libmachine: (ha-273199-m03)     <console type='pty'>
	I1028 11:54:52.737764   95151 main.go:141] libmachine: (ha-273199-m03)       <target type='serial' port='0'/>
	I1028 11:54:52.737796   95151 main.go:141] libmachine: (ha-273199-m03)     </console>
	I1028 11:54:52.737822   95151 main.go:141] libmachine: (ha-273199-m03)     <rng model='virtio'>
	I1028 11:54:52.737835   95151 main.go:141] libmachine: (ha-273199-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:54:52.737849   95151 main.go:141] libmachine: (ha-273199-m03)     </rng>
	I1028 11:54:52.737862   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737871   95151 main.go:141] libmachine: (ha-273199-m03)     
	I1028 11:54:52.737883   95151 main.go:141] libmachine: (ha-273199-m03)   </devices>
	I1028 11:54:52.737895   95151 main.go:141] libmachine: (ha-273199-m03) </domain>
	I1028 11:54:52.737906   95151 main.go:141] libmachine: (ha-273199-m03) 
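Editor's note: the <domain> document printed above is what the kvm2 driver hands to libvirt to create the node's VM. A minimal sketch of that define-and-boot step using the libvirt Go bindings is shown below; the qemu:///system URI and the domainXML placeholder are illustrative assumptions, not minikube's actual code.

    // Minimal sketch: define and boot a KVM domain from an XML description,
    // roughly what the Create step logged above does via libvirt.
    // Assumes the libvirt Go bindings (libvirt.org/go/libvirt) and a local daemon.
    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect to libvirt: %v", err)
        }
        defer conn.Close()

        // domainXML stands in for the <domain type='kvm'> document logged above.
        domainXML := "<domain type='kvm'>...</domain>"

        // DomainDefineXML registers the domain persistently; Create boots it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatalf("define domain: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("start domain: %v", err)
        }
        log.Println("domain defined and started")
    }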
	I1028 11:54:52.744674   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:8b:32:6e in network default
	I1028 11:54:52.745255   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring networks are active...
	I1028 11:54:52.745282   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:52.745947   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network default is active
	I1028 11:54:52.746212   95151 main.go:141] libmachine: (ha-273199-m03) Ensuring network mk-ha-273199 is active
	I1028 11:54:52.746662   95151 main.go:141] libmachine: (ha-273199-m03) Getting domain xml...
	I1028 11:54:52.747399   95151 main.go:141] libmachine: (ha-273199-m03) Creating domain...
	I1028 11:54:53.955503   95151 main.go:141] libmachine: (ha-273199-m03) Waiting to get IP...
	I1028 11:54:53.956506   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:53.956900   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:53.956929   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:53.956873   95896 retry.go:31] will retry after 206.527377ms: waiting for machine to come up
	I1028 11:54:54.165229   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.165718   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.165747   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.165667   95896 retry.go:31] will retry after 298.714532ms: waiting for machine to come up
	I1028 11:54:54.466211   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.466648   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.466677   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.466592   95896 retry.go:31] will retry after 313.294403ms: waiting for machine to come up
	I1028 11:54:54.781194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:54.781751   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:54.781781   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:54.781697   95896 retry.go:31] will retry after 490.276773ms: waiting for machine to come up
	I1028 11:54:55.273485   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:55.273980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:55.274010   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:55.273908   95896 retry.go:31] will retry after 747.967363ms: waiting for machine to come up
	I1028 11:54:56.023947   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.024406   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.024436   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.024354   95896 retry.go:31] will retry after 879.955575ms: waiting for machine to come up
	I1028 11:54:56.905338   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:56.905786   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:56.905854   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:56.905727   95896 retry.go:31] will retry after 900.403526ms: waiting for machine to come up
	I1028 11:54:57.807987   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:57.808508   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:57.808532   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:57.808456   95896 retry.go:31] will retry after 915.528727ms: waiting for machine to come up
	I1028 11:54:58.725704   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:54:58.726141   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:54:58.726171   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:54:58.726079   95896 retry.go:31] will retry after 1.589094397s: waiting for machine to come up
	I1028 11:55:00.316739   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:00.317159   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:00.317192   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:00.317103   95896 retry.go:31] will retry after 2.113867198s: waiting for machine to come up
	I1028 11:55:02.432898   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:02.433399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:02.433425   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:02.433344   95896 retry.go:31] will retry after 2.28050393s: waiting for machine to come up
	I1028 11:55:04.716742   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:04.717181   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:04.717204   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:04.717143   95896 retry.go:31] will retry after 2.249398536s: waiting for machine to come up
	I1028 11:55:06.969577   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:06.970058   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:06.970080   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:06.970033   95896 retry.go:31] will retry after 2.958136846s: waiting for machine to come up
	I1028 11:55:09.929637   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:09.930041   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find current IP address of domain ha-273199-m03 in network mk-ha-273199
	I1028 11:55:09.930070   95151 main.go:141] libmachine: (ha-273199-m03) DBG | I1028 11:55:09.929982   95896 retry.go:31] will retry after 4.070894756s: waiting for machine to come up
	I1028 11:55:14.002837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003301   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has current primary IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.003323   95151 main.go:141] libmachine: (ha-273199-m03) Found IP for machine: 192.168.39.14
	I1028 11:55:14.003336   95151 main.go:141] libmachine: (ha-273199-m03) Reserving static IP address...
	I1028 11:55:14.003697   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "ha-273199-m03", mac: "52:54:00:46:1d:e9", ip: "192.168.39.14"} in network mk-ha-273199
	I1028 11:55:14.078161   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:14.078198   95151 main.go:141] libmachine: (ha-273199-m03) Reserved static IP address: 192.168.39.14
	I1028 11:55:14.078221   95151 main.go:141] libmachine: (ha-273199-m03) Waiting for SSH to be available...
	I1028 11:55:14.080426   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:14.080837   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199
	I1028 11:55:14.080864   95151 main.go:141] libmachine: (ha-273199-m03) DBG | unable to find defined IP address of network mk-ha-273199 interface with MAC address 52:54:00:46:1d:e9
	I1028 11:55:14.080998   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:14.081020   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:14.081088   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:14.081126   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:14.081172   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:14.084960   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: exit status 255: 
	I1028 11:55:14.084981   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 11:55:14.084988   95151 main.go:141] libmachine: (ha-273199-m03) DBG | command : exit 0
	I1028 11:55:14.084993   95151 main.go:141] libmachine: (ha-273199-m03) DBG | err     : exit status 255
	I1028 11:55:14.084999   95151 main.go:141] libmachine: (ha-273199-m03) DBG | output  : 
	I1028 11:55:17.085220   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Getting to WaitForSSH function...
	I1028 11:55:17.087584   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.087980   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.088014   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.088124   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH client type: external
	I1028 11:55:17.088151   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa (-rw-------)
	I1028 11:55:17.088186   95151 main.go:141] libmachine: (ha-273199-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:55:17.088203   95151 main.go:141] libmachine: (ha-273199-m03) DBG | About to run SSH command:
	I1028 11:55:17.088242   95151 main.go:141] libmachine: (ha-273199-m03) DBG | exit 0
	I1028 11:55:17.219250   95151 main.go:141] libmachine: (ha-273199-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 11:55:17.219518   95151 main.go:141] libmachine: (ha-273199-m03) KVM machine creation complete!
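Editor's note: the repeated "will retry after ...: waiting for machine to come up" lines above come from a wait-for-IP loop that polls the domain's DHCP lease with growing delays until an address appears. A minimal sketch of that retry pattern follows; waitForIP, lookupIP and the backoff constants are illustrative stand-ins, not minikube's retry.go.

    // Minimal sketch: poll for a condition with growing, jittered delays,
    // in the spirit of the wait-for-IP loop logged above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP keeps calling lookupIP until it returns an address or the
    // timeout expires, sleeping a little longer after each failed attempt.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        // Fake lookup that succeeds on the third call, standing in for a
        // DHCP-lease query against the libvirt network.
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.14", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }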
	I1028 11:55:17.219876   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:17.220483   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220685   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:17.220845   95151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:55:17.220861   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetState
	I1028 11:55:17.222309   95151 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:55:17.222328   95151 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:55:17.222335   95151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:55:17.222343   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.224588   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.224925   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.224952   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.225089   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.225238   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225410   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.225535   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.225685   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.225933   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.225948   95151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:55:17.334782   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:55:17.334812   95151 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:55:17.334821   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.337833   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338269   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.338297   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.338479   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.338845   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339007   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.339176   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.339341   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.339539   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.339557   95151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:55:17.451978   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:55:17.452046   95151 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:55:17.452059   95151 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:55:17.452070   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452277   95151 buildroot.go:166] provisioning hostname "ha-273199-m03"
	I1028 11:55:17.452288   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.452476   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.455103   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455535   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.455562   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.455708   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.455867   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.455984   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.456067   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.456198   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.456408   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.456424   95151 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199-m03 && echo "ha-273199-m03" | sudo tee /etc/hostname
	I1028 11:55:17.580666   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199-m03
	
	I1028 11:55:17.580700   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.583194   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583511   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.583528   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.583802   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.584016   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584194   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.584336   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.584491   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:17.584694   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:17.584718   95151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:55:17.704448   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:55:17.704483   95151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 11:55:17.704502   95151 buildroot.go:174] setting up certificates
	I1028 11:55:17.704515   95151 provision.go:84] configureAuth start
	I1028 11:55:17.704525   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetMachineName
	I1028 11:55:17.704814   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:17.707324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707661   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.707690   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.707847   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.710530   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710812   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.710834   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.710987   95151 provision.go:143] copyHostCerts
	I1028 11:55:17.711016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711055   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 11:55:17.711067   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 11:55:17.711144   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 11:55:17.711240   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711266   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 11:55:17.711274   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 11:55:17.711309   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 11:55:17.711375   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711397   95151 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 11:55:17.711406   95151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 11:55:17.711441   95151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 11:55:17.711512   95151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199-m03 san=[127.0.0.1 192.168.39.14 ha-273199-m03 localhost minikube]
	I1028 11:55:17.872732   95151 provision.go:177] copyRemoteCerts
	I1028 11:55:17.872791   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:55:17.872822   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:17.875766   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876231   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:17.876275   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:17.876474   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:17.876674   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:17.876862   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:17.877007   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:17.961016   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:55:17.961081   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 11:55:17.984138   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:55:17.984226   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:55:18.008131   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:55:18.008227   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:55:18.031369   95151 provision.go:87] duration metric: took 326.838997ms to configureAuth
	I1028 11:55:18.031405   95151 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:55:18.031687   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:18.031768   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.034245   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034499   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.034512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.034834   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.035030   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035212   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.035366   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.035511   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.035733   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.035755   95151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:55:18.272929   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:55:18.272957   95151 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:55:18.272965   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetURL
	I1028 11:55:18.274324   95151 main.go:141] libmachine: (ha-273199-m03) DBG | Using libvirt version 6000000
	I1028 11:55:18.276917   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277260   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.277286   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.277469   95151 main.go:141] libmachine: Docker is up and running!
	I1028 11:55:18.277495   95151 main.go:141] libmachine: Reticulating splines...
	I1028 11:55:18.277503   95151 client.go:171] duration metric: took 25.997791015s to LocalClient.Create
	I1028 11:55:18.277533   95151 start.go:167] duration metric: took 25.997864783s to libmachine.API.Create "ha-273199"
	I1028 11:55:18.277545   95151 start.go:293] postStartSetup for "ha-273199-m03" (driver="kvm2")
	I1028 11:55:18.277554   95151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:55:18.277570   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.277772   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:55:18.277797   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.280107   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280473   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.280500   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.280672   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.280818   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.280972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.281096   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.364949   95151 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:55:18.368679   95151 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:55:18.368702   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 11:55:18.368765   95151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 11:55:18.368831   95151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 11:55:18.368841   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 11:55:18.368936   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:55:18.377576   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:18.398595   95151 start.go:296] duration metric: took 121.036125ms for postStartSetup
	I1028 11:55:18.398663   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetConfigRaw
	I1028 11:55:18.399226   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.401512   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.401817   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.401845   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.402086   95151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 11:55:18.402271   95151 start.go:128] duration metric: took 26.1416351s to createHost
	I1028 11:55:18.402293   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.404399   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404785   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.404814   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.404972   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.405120   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405233   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.405349   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.405479   95151 main.go:141] libmachine: Using SSH client type: native
	I1028 11:55:18.405697   95151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1028 11:55:18.405707   95151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:55:18.516101   95151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116518.496273878
	
	I1028 11:55:18.516127   95151 fix.go:216] guest clock: 1730116518.496273878
	I1028 11:55:18.516135   95151 fix.go:229] Guest: 2024-10-28 11:55:18.496273878 +0000 UTC Remote: 2024-10-28 11:55:18.402282303 +0000 UTC m=+140.534554028 (delta=93.991575ms)
	I1028 11:55:18.516153   95151 fix.go:200] guest clock delta is within tolerance: 93.991575ms
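Editor's note: the fix.go lines above compare the guest's `date +%s.%N` output with the host clock and accept the machine only if the drift stays within a tolerance. A small sketch of that comparison follows; clockDelta and the one-second tolerance are illustrative assumptions, not minikube's exact logic.

    // Minimal sketch: parse a guest's `date +%s.%N` output and measure its
    // offset from the local clock, as the guest-clock check above does.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Pad or truncate the fractional part to nanoseconds.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, err
            }
        }
        return local.Sub(time.Unix(sec, nsec)), nil
    }

    func main() {
        delta, err := clockDelta("1730116518.496273878", time.Now())
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // illustrative acceptable drift
        if delta < tolerance && delta > -tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        }
    }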
	I1028 11:55:18.516160   95151 start.go:83] releasing machines lock for "ha-273199-m03", held for 26.255640766s
	I1028 11:55:18.516185   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.516440   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:18.519412   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.519815   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.519848   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.524337   95151 out.go:177] * Found network options:
	I1028 11:55:18.525743   95151 out.go:177]   - NO_PROXY=192.168.39.208,192.168.39.225
	W1028 11:55:18.527126   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.527158   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.527179   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527726   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.527918   95151 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 11:55:18.528047   95151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:55:18.528091   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	W1028 11:55:18.528116   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:55:18.528141   95151 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:55:18.528213   95151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:55:18.528236   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 11:55:18.531068   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531433   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531460   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531507   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.531598   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.531771   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.531976   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:18.531993   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532001   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:18.532119   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 11:55:18.532160   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.532259   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 11:55:18.532384   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 11:55:18.532522   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 11:55:18.778405   95151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:55:18.783655   95151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:55:18.783756   95151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:55:18.797677   95151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:55:18.797700   95151 start.go:495] detecting cgroup driver to use...
	I1028 11:55:18.797761   95151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:55:18.814061   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:55:18.825773   95151 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:55:18.825825   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:55:18.837935   95151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:55:18.849554   95151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:55:18.965481   95151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:55:19.099249   95151 docker.go:233] disabling docker service ...
	I1028 11:55:19.099323   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:55:19.113114   95151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:55:19.124849   95151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:55:19.250769   95151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:55:19.359879   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:55:19.373349   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:55:19.389521   95151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:55:19.389615   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.398854   95151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:55:19.398906   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.407802   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.417192   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.427164   95151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:55:19.436640   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.445835   95151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.462270   95151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:55:19.471609   95151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:55:19.480345   95151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:55:19.480383   95151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:55:19.492803   95151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:55:19.501227   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:19.617782   95151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:55:19.703544   95151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:55:19.703660   95151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:55:19.708269   95151 start.go:563] Will wait 60s for crictl version
	I1028 11:55:19.708326   95151 ssh_runner.go:195] Run: which crictl
	I1028 11:55:19.712086   95151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:55:19.749930   95151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:55:19.750010   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.775811   95151 ssh_runner.go:195] Run: crio --version
	I1028 11:55:19.801952   95151 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:55:19.803114   95151 out.go:177]   - env NO_PROXY=192.168.39.208
	I1028 11:55:19.804273   95151 out.go:177]   - env NO_PROXY=192.168.39.208,192.168.39.225
	I1028 11:55:19.805417   95151 main.go:141] libmachine: (ha-273199-m03) Calling .GetIP
	I1028 11:55:19.808218   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808625   95151 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 11:55:19.808655   95151 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 11:55:19.808919   95151 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:55:19.812627   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:55:19.824073   95151 mustload.go:65] Loading cluster: ha-273199
	I1028 11:55:19.824319   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:19.824582   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.824620   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.838910   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I1028 11:55:19.839306   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.839763   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.839782   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.840142   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.840307   95151 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 11:55:19.841569   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:19.841856   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:19.841897   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:19.855881   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I1028 11:55:19.856375   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:19.856826   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:19.856843   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:19.857163   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:19.857327   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:19.857467   95151 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.14
	I1028 11:55:19.857480   95151 certs.go:194] generating shared ca certs ...
	I1028 11:55:19.857496   95151 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.857646   95151 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 11:55:19.857702   95151 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 11:55:19.857720   95151 certs.go:256] generating profile certs ...
	I1028 11:55:19.857827   95151 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 11:55:19.857863   95151 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7
	I1028 11:55:19.857891   95151 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.14 192.168.39.254]
	I1028 11:55:19.946624   95151 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 ...
	I1028 11:55:19.946653   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7: {Name:mk3236f0712e0310e6a0f8a3941b2eeadd0570c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946816   95151 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 ...
	I1028 11:55:19.946829   95151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7: {Name:mka0c613afe4278aca8a4ff26ddba521c4e341b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:55:19.946908   95151 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 11:55:19.947042   95151 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.510b5ff7 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 11:55:19.947166   95151 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 11:55:19.947182   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:55:19.947196   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:55:19.947208   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:55:19.947221   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:55:19.947233   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:55:19.947245   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:55:19.947256   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:55:19.967716   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:55:19.967802   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 11:55:19.967847   95151 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 11:55:19.967864   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 11:55:19.967899   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 11:55:19.967933   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:55:19.967965   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 11:55:19.968019   95151 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 11:55:19.968051   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 11:55:19.968066   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 11:55:19.968076   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:19.968113   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:19.971063   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971502   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:19.971527   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:19.971715   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:19.971902   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:19.972073   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:19.972212   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:20.047980   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:55:20.052462   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:55:20.063257   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:55:20.067603   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1028 11:55:20.083360   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:55:20.087209   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:55:20.096958   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:55:20.100595   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:55:20.113829   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:55:20.117648   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:55:20.126859   95151 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:55:20.130471   95151 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:55:20.139759   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:55:20.167843   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:55:20.191233   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:55:20.214438   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:55:20.235571   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:55:20.261436   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:55:20.285034   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:55:20.310624   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:55:20.332555   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 11:55:20.354176   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 11:55:20.374974   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:55:20.396001   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:55:20.411032   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1028 11:55:20.426186   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:55:20.441112   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:55:20.456730   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:55:20.472441   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:55:20.488012   95151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:55:20.502635   95151 ssh_runner.go:195] Run: openssl version
	I1028 11:55:20.508164   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 11:55:20.519601   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523711   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.523777   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 11:55:20.529016   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 11:55:20.538537   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 11:55:20.548100   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552319   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.552375   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 11:55:20.557900   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:55:20.567792   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:55:20.577338   95151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581264   95151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.581323   95151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:55:20.586529   95151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:55:20.596428   95151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:55:20.600115   95151 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:55:20.600167   95151 kubeadm.go:934] updating node {m03 192.168.39.14 8443 v1.31.2 crio true true} ...
	I1028 11:55:20.600258   95151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:55:20.600291   95151 kube-vip.go:115] generating kube-vip config ...
	I1028 11:55:20.600325   95151 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:55:20.616989   95151 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:55:20.617099   95151 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:55:20.617151   95151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.626357   95151 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:55:20.626409   95151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:55:20.634842   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:55:20.634876   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634922   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:55:20.634942   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:55:20.634948   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.634853   95151 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:55:20.635007   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:55:20.635050   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:55:20.638692   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:55:20.638722   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:55:20.663836   95151 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.663872   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:55:20.663905   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:55:20.663970   95151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:55:20.699827   95151 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:55:20.699877   95151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:55:21.384145   95151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:55:21.393997   95151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:55:21.409884   95151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:55:21.425811   95151 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:55:21.441992   95151 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:55:21.445803   95151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:55:21.457453   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:21.579499   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:21.596582   95151 host.go:66] Checking if "ha-273199" exists ...
	I1028 11:55:21.597031   95151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:55:21.597081   95151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:55:21.612568   95151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I1028 11:55:21.613014   95151 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:55:21.613608   95151 main.go:141] libmachine: Using API Version  1
	I1028 11:55:21.613636   95151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:55:21.613983   95151 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:55:21.614133   95151 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 11:55:21.614251   95151 start.go:317] joinCluster: &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:55:21.614418   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:55:21.614445   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 11:55:21.617174   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617565   95151 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 11:55:21.617589   95151 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 11:55:21.617762   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 11:55:21.617923   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 11:55:21.618054   95151 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 11:55:21.618200   95151 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 11:55:21.766904   95151 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:21.766967   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443"
	I1028 11:55:42.707746   95151 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j0glvo.rmlrnzj0xpvqg3aw --discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-273199-m03 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443": (20.940747813s)
	I1028 11:55:42.707786   95151 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:55:43.259520   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-273199-m03 minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=ha-273199 minikube.k8s.io/primary=false
	I1028 11:55:43.364349   95151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-273199-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:55:43.486876   95151 start.go:319] duration metric: took 21.872622243s to joinCluster
	I1028 11:55:43.486974   95151 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:55:43.487346   95151 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:55:43.488385   95151 out.go:177] * Verifying Kubernetes components...
	I1028 11:55:43.489624   95151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:55:43.714323   95151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:55:43.797310   95151 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:55:43.797585   95151 kapi.go:59] client config for ha-273199: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.crt", KeyFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key", CAFile:"/home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:55:43.797659   95151 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I1028 11:55:43.797894   95151 node_ready.go:35] waiting up to 6m0s for node "ha-273199-m03" to be "Ready" ...
	I1028 11:55:43.797978   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:43.797989   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:43.797999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:43.798002   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:43.801478   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.298184   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.298206   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.298216   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.298222   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.301984   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:44.798900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:44.798925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:44.798933   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:44.798937   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:44.802625   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.298286   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.298308   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.298316   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.298323   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.301749   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.798575   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:45.798599   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:45.798606   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:45.798609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:45.801730   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:45.802260   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:46.298797   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.298831   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.298843   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.298848   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.301856   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:46.798975   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:46.798994   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:46.799003   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:46.799009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:46.802334   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.298943   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.298969   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.298981   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.298987   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.302012   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.799134   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:47.799156   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:47.799164   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:47.799170   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:47.802967   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:47.803491   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:48.298732   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.298760   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.298772   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.298778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.302148   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:48.799142   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:48.799170   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:48.799182   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:48.799190   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:48.802961   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.298717   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.298741   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.298752   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.298759   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.302024   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:49.798693   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:49.798713   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:49.798721   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:49.798726   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:49.832585   95151 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1028 11:55:49.833180   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:50.298166   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.298188   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.298197   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.298201   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.301302   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:50.798073   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:50.798095   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:50.798104   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:50.798108   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:50.803748   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:55:51.298872   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.298899   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.298910   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.298913   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.301397   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:51.798388   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:51.798420   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:51.798428   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:51.798434   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:51.801659   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:52.298527   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.298549   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.298561   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.298565   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.301585   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:52.302112   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:52.798187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:52.798212   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:52.798223   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:52.798228   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:52.801528   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.298514   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.298542   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.298550   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.298554   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.301689   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:53.798539   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:53.798559   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:53.798574   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:53.798578   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:53.801491   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:54.298293   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.298317   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.298325   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.298330   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.302064   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:54.302719   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:54.798749   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:54.798769   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:54.798778   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:54.798783   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:54.801841   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.298678   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.298701   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.298712   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.298716   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.302094   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:55.798085   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:55.798105   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:55.798113   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:55.798116   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:55.800935   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:56.298920   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.298949   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.298958   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.298962   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.302100   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.798358   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:56.798381   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:56.798390   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:56.798394   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:56.801648   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:56.802259   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:57.298900   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.298925   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.298937   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.298943   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.301768   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:57.798111   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:57.798136   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:57.798148   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:57.798154   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:57.802245   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:55:58.299121   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.299149   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.299162   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.299171   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.302703   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:58.798590   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:58.798615   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:58.798628   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:58.798634   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:58.801208   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:55:59.299008   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.299036   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.299047   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.299054   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.302735   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:55:59.303420   95151 node_ready.go:53] node "ha-273199-m03" has status "Ready":"False"
	I1028 11:55:59.798874   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:55:59.798896   95151 round_trippers.go:469] Request Headers:
	I1028 11:55:59.798903   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:55:59.798907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:55:59.802046   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.298533   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.298555   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.298562   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.298567   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.301628   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:00.798592   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:00.798612   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:00.798619   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:00.798623   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:00.801213   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.298108   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.298133   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.298143   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.298148   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301184   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.301784   95151 node_ready.go:49] node "ha-273199-m03" has status "Ready":"True"
	I1028 11:56:01.301805   95151 node_ready.go:38] duration metric: took 17.503895303s for node "ha-273199-m03" to be "Ready" ...
	I1028 11:56:01.301814   95151 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:01.301887   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:01.301896   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.301903   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.301911   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.308580   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:01.316771   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.316873   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7rnn9
	I1028 11:56:01.316885   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.316900   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.316907   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.320308   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.320987   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.321003   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.321013   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.321019   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.323787   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.324347   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.324365   95151 pod_ready.go:82] duration metric: took 7.565058ms for pod "coredns-7c65d6cfc9-7rnn9" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324373   95151 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.324419   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hc26g
	I1028 11:56:01.324427   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.324433   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.324439   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.326735   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.327335   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.327355   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.327365   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.327373   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.329530   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.330057   95151 pod_ready.go:93] pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.330074   95151 pod_ready.go:82] duration metric: took 5.693547ms for pod "coredns-7c65d6cfc9-hc26g" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330086   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.330136   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199
	I1028 11:56:01.330146   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.330155   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.330165   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.332526   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.332999   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:01.333016   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.333027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.333032   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.334989   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.335422   95151 pod_ready.go:93] pod "etcd-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.335440   95151 pod_ready.go:82] duration metric: took 5.348301ms for pod "etcd-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335448   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.335488   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m02
	I1028 11:56:01.335496   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.335502   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.335506   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.337739   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:01.338582   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:01.338597   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.338604   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.338609   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.340562   95151 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:56:01.341152   95151 pod_ready.go:93] pod "etcd-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.341169   95151 pod_ready.go:82] duration metric: took 5.715551ms for pod "etcd-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.341177   95151 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.498553   95151 request.go:632] Waited for 157.309109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498638   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-273199-m03
	I1028 11:56:01.498650   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.498660   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.498665   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.501894   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.699071   95151 request.go:632] Waited for 196.385515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699155   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:01.699161   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.699169   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.699174   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.702324   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:01.702894   95151 pod_ready.go:93] pod "etcd-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:01.702916   95151 pod_ready.go:82] duration metric: took 361.733856ms for pod "etcd-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.702934   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:01.898705   95151 request.go:632] Waited for 195.691939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898957   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199
	I1028 11:56:01.898985   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:01.898999   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:01.899009   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:01.902374   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.098254   95151 request.go:632] Waited for 195.287162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098328   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:02.098335   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.098347   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.098353   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.101196   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.101738   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.101763   95151 pod_ready.go:82] duration metric: took 398.820372ms for pod "kube-apiserver-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.101781   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.298212   95151 request.go:632] Waited for 196.275952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298275   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m02
	I1028 11:56:02.298281   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.298290   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.298301   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.301860   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.499036   95151 request.go:632] Waited for 196.376254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499126   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:02.499138   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.499147   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.499155   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.502306   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.502777   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.502797   95151 pod_ready.go:82] duration metric: took 401.004802ms for pod "kube-apiserver-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.502809   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.698962   95151 request.go:632] Waited for 196.058055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699040   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-273199-m03
	I1028 11:56:02.699049   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.699060   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.699069   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.702304   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:02.898265   95151 request.go:632] Waited for 195.32967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898332   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:02.898337   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:02.898346   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:02.898349   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:02.901285   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:02.901755   95151 pod_ready.go:93] pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:02.901774   95151 pod_ready.go:82] duration metric: took 398.957477ms for pod "kube-apiserver-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:02.901786   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.098215   95151 request.go:632] Waited for 196.338003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098302   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199
	I1028 11:56:03.098312   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.098326   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.098336   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.101391   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.299109   95151 request.go:632] Waited for 197.052748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299187   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:03.299198   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.299211   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.299219   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.302429   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.303124   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.303143   95151 pod_ready.go:82] duration metric: took 401.346731ms for pod "kube-controller-manager-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.303154   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.499186   95151 request.go:632] Waited for 195.929738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499255   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m02
	I1028 11:56:03.499260   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.499268   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.499283   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.502463   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.698544   95151 request.go:632] Waited for 195.349647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698622   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:03.698627   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.698635   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.698642   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.701741   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:03.702403   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:03.702426   95151 pod_ready.go:82] duration metric: took 399.264829ms for pod "kube-controller-manager-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.702441   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:03.898913   95151 request.go:632] Waited for 196.399022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899002   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-273199-m03
	I1028 11:56:03.899011   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:03.899023   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:03.899029   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:03.902056   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.099025   95151 request.go:632] Waited for 196.30082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099105   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.099116   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.099127   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.099137   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.102284   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.102800   95151 pod_ready.go:93] pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.102822   95151 pod_ready.go:82] duration metric: took 400.371733ms for pod "kube-controller-manager-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.102837   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.299058   95151 request.go:632] Waited for 196.137259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299139   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g4h7
	I1028 11:56:04.299144   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.299153   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.299157   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.302746   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.499079   95151 request.go:632] Waited for 195.393701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499163   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:04.499171   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.499185   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.499195   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.503387   95151 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:56:04.504037   95151 pod_ready.go:93] pod "kube-proxy-9g4h7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.504061   95151 pod_ready.go:82] duration metric: took 401.216048ms for pod "kube-proxy-9g4h7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.504076   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.698976   95151 request.go:632] Waited for 194.814472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699062   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrzn7
	I1028 11:56:04.699071   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.699079   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.699084   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.702055   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:04.898609   95151 request.go:632] Waited for 195.739677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898675   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:04.898683   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:04.898693   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:04.898700   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:04.901923   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:04.902584   95151 pod_ready.go:93] pod "kube-proxy-nrzn7" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:04.902605   95151 pod_ready.go:82] duration metric: took 398.518978ms for pod "kube-proxy-nrzn7" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:04.902614   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.098688   95151 request.go:632] Waited for 195.978821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098754   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr5vf
	I1028 11:56:05.098759   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.098768   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.098778   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.102003   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.298290   95151 request.go:632] Waited for 195.293864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298361   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.298369   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.298380   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.298386   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.301816   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.302344   95151 pod_ready.go:93] pod "kube-proxy-tr5vf" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.302364   95151 pod_ready.go:82] duration metric: took 399.743307ms for pod "kube-proxy-tr5vf" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.302375   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.498499   95151 request.go:632] Waited for 196.032121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498559   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199
	I1028 11:56:05.498565   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.498572   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.498584   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.501658   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.698555   95151 request.go:632] Waited for 196.349621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698630   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199
	I1028 11:56:05.698639   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.698659   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.698670   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.701856   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:05.702478   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:05.702502   95151 pod_ready.go:82] duration metric: took 400.117869ms for pod "kube-scheduler-ha-273199" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.702516   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:05.898432   95151 request.go:632] Waited for 195.801686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898504   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m02
	I1028 11:56:05.898512   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:05.898523   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:05.898535   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:05.901090   95151 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:56:06.099148   95151 request.go:632] Waited for 197.39166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099243   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m02
	I1028 11:56:06.099256   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.099266   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.099273   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.102573   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.103298   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.103317   95151 pod_ready.go:82] duration metric: took 400.794152ms for pod "kube-scheduler-ha-273199-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.103328   95151 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.298494   95151 request.go:632] Waited for 195.077295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298597   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-273199-m03
	I1028 11:56:06.298623   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.298634   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.298639   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.301973   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.499177   95151 request.go:632] Waited for 196.369372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499245   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-273199-m03
	I1028 11:56:06.499253   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.499263   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.499271   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.503129   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.503622   95151 pod_ready.go:93] pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:56:06.503653   95151 pod_ready.go:82] duration metric: took 400.317222ms for pod "kube-scheduler-ha-273199-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:56:06.503666   95151 pod_ready.go:39] duration metric: took 5.2018361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:56:06.503683   95151 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:56:06.503735   95151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:56:06.519167   95151 api_server.go:72] duration metric: took 23.032149937s to wait for apiserver process to appear ...
	I1028 11:56:06.519193   95151 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:56:06.519218   95151 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1028 11:56:06.524148   95151 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1028 11:56:06.524235   95151 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I1028 11:56:06.524247   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.524259   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.524269   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.525138   95151 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:56:06.525206   95151 api_server.go:141] control plane version: v1.31.2
	I1028 11:56:06.525222   95151 api_server.go:131] duration metric: took 6.021057ms to wait for apiserver health ...
	I1028 11:56:06.525232   95151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:56:06.698920   95151 request.go:632] Waited for 173.589854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699014   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:06.699026   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.699037   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.699046   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.705719   95151 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:56:06.711799   95151 system_pods.go:59] 24 kube-system pods found
	I1028 11:56:06.711826   95151 system_pods.go:61] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:06.711831   95151 system_pods.go:61] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:06.711834   95151 system_pods.go:61] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:06.711837   95151 system_pods.go:61] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:06.711840   95151 system_pods.go:61] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:06.711845   95151 system_pods.go:61] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:06.711849   95151 system_pods.go:61] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:06.711858   95151 system_pods.go:61] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:06.711864   95151 system_pods.go:61] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:06.711869   95151 system_pods.go:61] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:06.711877   95151 system_pods.go:61] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:06.711884   95151 system_pods.go:61] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:06.711893   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:06.711901   95151 system_pods.go:61] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:06.711906   95151 system_pods.go:61] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:06.711911   95151 system_pods.go:61] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:06.711917   95151 system_pods.go:61] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:06.711923   95151 system_pods.go:61] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:06.711926   95151 system_pods.go:61] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:06.711932   95151 system_pods.go:61] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:06.711935   95151 system_pods.go:61] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:06.711940   95151 system_pods.go:61] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:06.711943   95151 system_pods.go:61] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:06.711947   95151 system_pods.go:61] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:06.711955   95151 system_pods.go:74] duration metric: took 186.713107ms to wait for pod list to return data ...
	I1028 11:56:06.711967   95151 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:56:06.899177   95151 request.go:632] Waited for 187.113111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899236   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:56:06.899242   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:06.899250   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:06.899255   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:06.902353   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:06.902463   95151 default_sa.go:45] found service account: "default"
	I1028 11:56:06.902477   95151 default_sa.go:55] duration metric: took 190.499796ms for default service account to be created ...
	I1028 11:56:06.902489   95151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:56:07.098925   95151 request.go:632] Waited for 196.358925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099006   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I1028 11:56:07.099015   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.099027   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.099034   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.104802   95151 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:56:07.111244   95151 system_pods.go:86] 24 kube-system pods found
	I1028 11:56:07.111271   95151 system_pods.go:89] "coredns-7c65d6cfc9-7rnn9" [6addf18c-48d4-4b46-9695-d3c73f66dcf7] Running
	I1028 11:56:07.111276   95151 system_pods.go:89] "coredns-7c65d6cfc9-hc26g" [352843f5-74ea-4f39-9b5e-8a14206f5ef6] Running
	I1028 11:56:07.111280   95151 system_pods.go:89] "etcd-ha-273199" [c5db9f35-3a03-4f2d-b443-61f873acf7b7] Running
	I1028 11:56:07.111284   95151 system_pods.go:89] "etcd-ha-273199-m02" [13eca23c-fd7c-420c-8e5b-1043a5bf03f1] Running
	I1028 11:56:07.111287   95151 system_pods.go:89] "etcd-ha-273199-m03" [5f55a9d6-a456-429f-9b74-cb7f84972387] Running
	I1028 11:56:07.111292   95151 system_pods.go:89] "kindnet-2gldl" [669d86dc-15f1-4cda-9f16-6ebfabad12ae] Running
	I1028 11:56:07.111296   95151 system_pods.go:89] "kindnet-rz4mf" [33ad0e92-e29c-4e54-8593-7cffd69fd439] Running
	I1028 11:56:07.111301   95151 system_pods.go:89] "kindnet-ts2mp" [b44672ac-2568-491b-be32-f842e79254b3] Running
	I1028 11:56:07.111306   95151 system_pods.go:89] "kube-apiserver-ha-273199" [afee6d99-afef-42a8-9fe4-ea1ced7ee386] Running
	I1028 11:56:07.111312   95151 system_pods.go:89] "kube-apiserver-ha-273199-m02" [0455be9b-7f7b-4059-9425-f5a41debf156] Running
	I1028 11:56:07.111320   95151 system_pods.go:89] "kube-apiserver-ha-273199-m03" [c105b6cc-4d2d-41b0-b97b-b9062fefac6e] Running
	I1028 11:56:07.111326   95151 system_pods.go:89] "kube-controller-manager-ha-273199" [9d8860ec-439e-4848-8432-44a3e34f903c] Running
	I1028 11:56:07.111336   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m02" [63641fe1-6602-4ecd-8ceb-03f7febf9d90] Running
	I1028 11:56:07.111342   95151 system_pods.go:89] "kube-controller-manager-ha-273199-m03" [67649737-1ea7-469e-adca-de5256b7e1da] Running
	I1028 11:56:07.111348   95151 system_pods.go:89] "kube-proxy-9g4h7" [4899b8e5-73ce-487e-81ca-f833a1dc900b] Running
	I1028 11:56:07.111354   95151 system_pods.go:89] "kube-proxy-nrzn7" [578d9ded-2c52-4040-a934-c348fe1ea8f5] Running
	I1028 11:56:07.111358   95151 system_pods.go:89] "kube-proxy-tr5vf" [1523079e-d7eb-432d-8023-83ac95c1c853] Running
	I1028 11:56:07.111364   95151 system_pods.go:89] "kube-scheduler-ha-273199" [7c2503ac-4e50-4829-bfeb-f3765c344f16] Running
	I1028 11:56:07.111368   95151 system_pods.go:89] "kube-scheduler-ha-273199-m02" [21107dcc-cbf5-4ab7-9787-b6b7ab0fccb3] Running
	I1028 11:56:07.111374   95151 system_pods.go:89] "kube-scheduler-ha-273199-m03" [32dacfe3-eedd-4564-a021-d4034949407b] Running
	I1028 11:56:07.111377   95151 system_pods.go:89] "kube-vip-ha-273199" [33c167cc-4d0a-4527-bc17-c160af45503c] Running
	I1028 11:56:07.111386   95151 system_pods.go:89] "kube-vip-ha-273199-m02" [f4630f40-f1bf-481c-86e3-4afabf60ad22] Running
	I1028 11:56:07.111391   95151 system_pods.go:89] "kube-vip-ha-273199-m03" [ff0e1725-49da-4769-8da6-667725b79550] Running
	I1028 11:56:07.111394   95151 system_pods.go:89] "storage-provisioner" [7e8f1437-aa9b-4d11-a516-f545f55e271c] Running
	I1028 11:56:07.111402   95151 system_pods.go:126] duration metric: took 208.905709ms to wait for k8s-apps to be running ...
	I1028 11:56:07.111413   95151 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:56:07.111468   95151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:56:07.126987   95151 system_svc.go:56] duration metric: took 15.565787ms WaitForService to wait for kubelet
	I1028 11:56:07.127011   95151 kubeadm.go:582] duration metric: took 23.639999996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:56:07.127031   95151 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:56:07.298754   95151 request.go:632] Waited for 171.640481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298832   95151 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I1028 11:56:07.298839   95151 round_trippers.go:469] Request Headers:
	I1028 11:56:07.298848   95151 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:56:07.298857   95151 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:56:07.302715   95151 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:56:07.303776   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303797   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303807   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303810   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303814   95151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:56:07.303817   95151 node_conditions.go:123] node cpu capacity is 2
	I1028 11:56:07.303821   95151 node_conditions.go:105] duration metric: took 176.784967ms to run NodePressure ...
	I1028 11:56:07.303834   95151 start.go:241] waiting for startup goroutines ...
	I1028 11:56:07.303857   95151 start.go:255] writing updated cluster config ...
	I1028 11:56:07.304142   95151 ssh_runner.go:195] Run: rm -f paused
	I1028 11:56:07.355822   95151 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:56:07.357678   95151 out.go:177] * Done! kubectl is now configured to use "ha-273199" cluster and "default" namespace by default
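	For reference, the readiness polling visible in the log above (node_ready.go and pod_ready.go repeatedly issuing GET /api/v1/nodes/ha-273199-m03 roughly every 500ms until the Ready condition flips to True) can be approximated outside of minikube with a small client-go loop. The sketch below is illustrative only, not minikube's implementation; the kubeconfig path and the hard-coded node name are assumptions.

	// readiness_poll.go: illustrative sketch (not minikube's code) of waiting for a
	// node's Ready condition, mirroring the GET /api/v1/nodes/<name> loop in the log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path and node name; adjust for your own cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same 6m budget the test log shows
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-273199-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval in the log
		}
		fmt.Println("timed out waiting for node Ready")
	}

	An equivalent check from the command line would be kubectl wait --for=condition=Ready node/ha-273199-m03 --timeout=6m.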
	
	
	==> CRI-O <==
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.662321366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0824934b-31d2-43d5-8e94-f586405559d3 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.663257216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41c66073-8fc0-4999-9349-5b01e28c6198 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.663665957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116800663642602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41c66073-8fc0-4999-9349-5b01e28c6198 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.664092572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd5315b6-cd6a-439e-af18-6bd65088a324 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.664159902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd5315b6-cd6a-439e-af18-6bd65088a324 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.664417429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd5315b6-cd6a-439e-af18-6bd65088a324 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.675850057Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=944924fb-8e07-42fd-9669-5921b28ab852 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.676219068Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-fnvwg,Uid:7e89846f-39f0-42a4-b343-0ae004376bc7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116568595326394,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:56:08.271095605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7e8f1437-aa9b-4d11-a516-f545f55e271c,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1730116437166402002,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T11:53:56.836966681Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc26g,Uid:352843f5-74ea-4f39-9b5e-8a14206f5ef6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116437152514863,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74ea-4f39-9b5e-8a14206f5ef6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:56.837780003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7rnn9,Uid:6addf18c-48d4-4b46-9695-d3c73f66dcf7,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1730116437137041444,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:56.826411741Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-tr5vf,Uid:1523079e-d7eb-432d-8023-83ac95c1c853,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116424827712969,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-28T11:53:43.016311556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&PodSandboxMetadata{Name:kindnet-2gldl,Uid:669d86dc-15f1-4cda-9f16-6ebfabad12ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116424826468891,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:53:43.020213220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-273199,Uid:ec1fb61a398f082d62933fd99a5e91c8,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1730116411862344870,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{kubernetes.io/config.hash: ec1fb61a398f082d62933fd99a5e91c8,kubernetes.io/config.seen: 2024-10-28T11:53:31.392312295Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-273199,Uid:2afa0eef601ae02df3405cd2d523046c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411860656774,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2afa
0eef601ae02df3405cd2d523046c,kubernetes.io/config.seen: 2024-10-28T11:53:31.392311542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-273199,Uid:de3f68a446dbf81588ffdebc94e65e05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411858786132,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de3f68a446dbf81588ffdebc94e65e05,kubernetes.io/config.seen: 2024-10-28T11:53:31.392310435Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-273199,Ui
d:67aa1fe51ef7e2d6640194db4db476a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411847852262,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.208:8443,kubernetes.io/config.hash: 67aa1fe51ef7e2d6640194db4db476a0,kubernetes.io/config.seen: 2024-10-28T11:53:31.392309218Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&PodSandboxMetadata{Name:etcd-ha-273199,Uid:af5894cc6d394a4575ef924f31654a84,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730116411838769279,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-273199,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: af5894cc6d394a4575ef924f31654a84,kubernetes.io/config.seen: 2024-10-28T11:53:31.392305945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=944924fb-8e07-42fd-9669-5921b28ab852 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.676721040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae3d5dc8-d35a-412d-9b33-85c3e6dcbba6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.676790526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae3d5dc8-d35a-412d-9b33-85c3e6dcbba6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.677076000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae3d5dc8-d35a-412d-9b33-85c3e6dcbba6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.699728169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99acf8c8-65b7-4b5a-80cf-4d4e5f02ccdc name=/runtime.v1.RuntimeService/Version
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.700038201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99acf8c8-65b7-4b5a-80cf-4d4e5f02ccdc name=/runtime.v1.RuntimeService/Version
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.701599534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63654587-9f99-4a48-b5ad-e0c8fd685282 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.701977242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116800701958788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63654587-9f99-4a48-b5ad-e0c8fd685282 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.702621386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6649306-d80a-443b-9c33-fd84284062e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.702670760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6649306-d80a-443b-9c33-fd84284062e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.702882955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6649306-d80a-443b-9c33-fd84284062e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.738716021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=419a36d7-cc24-4641-879b-63d474e12aa0 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.738785909Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=419a36d7-cc24-4641-879b-63d474e12aa0 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.740426474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ffa6c05-ee7d-4272-a98d-5723b870a27d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.740835890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116800740816395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ffa6c05-ee7d-4272-a98d-5723b870a27d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.741357939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9697aa8-8681-4ddc-a73b-a2b710fbbb9d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.741407527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9697aa8-8681-4ddc-a73b-a2b710fbbb9d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:00:00 ha-273199 crio[663]: time="2024-10-28 12:00:00.741627539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609ad54d4add217a403dab711011ec3d659cac6ccd74369cbdd816a01f146d08,PodSandboxId:5aab280940ba8796fb3fb2f789d02e5772324d633234812f640318583e764633,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730116570923560988,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fnvwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e89846f-39f0-42a4-b343-0ae004376bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d,PodSandboxId:a33a6d6dc5f668757a7697fd9056cee3c00c095811b2b7cbfbbb3b463f9ccbd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380182660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rnn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6addf18c-48d4-4b46-9695-d3c73f66dcf7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3,PodSandboxId:53cd5c1c15675458a9ff8deab1950e0db39cab559d1dc9616cc4808490e0c434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116437372638329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 7e8f1437-aa9b-4d11-a516-f545f55e271c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce,PodSandboxId:257fc926b128d66fc97f2b78529f25e335662363439ba9474e852e475fdae752,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116437380311534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hc26g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352843f5-74
ea-4f39-9b5e-8a14206f5ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9,PodSandboxId:ef059ce23254d5eb38777c8866a0e1e9600cab10b90ef5e11ffaebcca98473b9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301164
25263271415,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gldl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669d86dc-15f1-4cda-9f16-6ebfabad12ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a,PodSandboxId:0cbf13a852cd292d72d97ee5646b88b258c5f201ce34b057e18d9517dc3a84d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116424952598022,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr5vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1523079e-d7eb-432d-8023-83ac95c1c853,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447,PodSandboxId:cc7ea362731d6c6defc4c334d7c8b9cd8b771d703694467991007a6e099fa51d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730116414731154454,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1fb61a398f082d62933fd99a5e91c8,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df,PodSandboxId:2541db65f40ae3c63fb23d9a39a5ec638e362f6289edc73e82b9b5a48e431566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116412220871430,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3f68a446dbf81588ffdebc94e65e05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56,PodSandboxId:43ab783eb915158f24da54e7dca3caa6b897a02986e033645c50cc4717539c84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116412222544080,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aa1fe51ef7e2d6640194db4db476a0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c,PodSandboxId:737b1cd7f74b4f6f573cdce4b751f6d0486c6dd2797ca0d2dc179f134808761a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116412215353188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-273199,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2afa0eef601ae02df3405cd2d523046c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3,PodSandboxId:32e3db6238d43266729702631eedfcee3459fff524885e5705da02bf98520bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116412084689059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-273199,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5894cc6d394a4575ef924f31654a84,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9697aa8-8681-4ddc-a73b-a2b710fbbb9d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	609ad54d4add2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5aab280940ba8       busybox-7dff88458-fnvwg
	fe58f2eaad87a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   257fc926b128d       coredns-7c65d6cfc9-hc26g
	74749e3632776       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a33a6d6dc5f66       coredns-7c65d6cfc9-7rnn9
	72c80fedf6643       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   53cd5c1c15675       storage-provisioner
	e082051f544c2       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   ef059ce23254d       kindnet-2gldl
	82471ae5ddf92       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   0cbf13a852cd2       kube-proxy-tr5vf
	39409b2e85012       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   cc7ea362731d6       kube-vip-ha-273199
	8b350f0da3b16       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   43ab783eb9151       kube-apiserver-ha-273199
	07773cb979d8f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2541db65f40ae       kube-controller-manager-ha-273199
	6fb4822a5b791       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   737b1cd7f74b4       kube-scheduler-ha-273199
	ec2df51593c58       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   32e3db6238d43       etcd-ha-273199
	
	
	==> coredns [74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d] <==
	[INFO] 10.244.1.2:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227007s
	[INFO] 10.244.1.2:38770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002925427s
	[INFO] 10.244.1.2:48927 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147448s
	[INFO] 10.244.1.2:38077 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192376s
	[INFO] 10.244.0.4:54968 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.0.4:57503 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110201s
	[INFO] 10.244.0.4:34291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061267s
	[INFO] 10.244.0.4:50921 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128077s
	[INFO] 10.244.0.4:39917 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062677s
	[INFO] 10.244.2.2:60183 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014203s
	[INFO] 10.244.2.2:40291 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692422s
	[INFO] 10.244.2.2:46423 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149349s
	[INFO] 10.244.2.2:54634 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124106s
	[INFO] 10.244.1.2:50363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142769s
	[INFO] 10.244.1.2:35968 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225253s
	[INFO] 10.244.1.2:45996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107605s
	[INFO] 10.244.1.2:49921 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093269s
	[INFO] 10.244.0.4:53024 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012322s
	[INFO] 10.244.2.2:52722 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002033s
	[INFO] 10.244.2.2:57825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011394s
	[INFO] 10.244.1.2:34495 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211997s
	[INFO] 10.244.1.2:44656 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000288144s
	[INFO] 10.244.0.4:39255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021258s
	[INFO] 10.244.2.2:60661 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153264s
	[INFO] 10.244.2.2:45534 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088052s
	
	
	==> coredns [fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce] <==
	[INFO] 10.244.0.4:38250 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001327706s
	[INFO] 10.244.0.4:43351 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000111923s
	[INFO] 10.244.0.4:51500 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001177333s
	[INFO] 10.244.2.2:48939 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000124212s
	[INFO] 10.244.2.2:50808 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000124833s
	[INFO] 10.244.1.2:47587 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190204s
	[INFO] 10.244.0.4:58247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001672481s
	[INFO] 10.244.0.4:37091 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169137s
	[INFO] 10.244.0.4:48641 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098052s
	[INFO] 10.244.2.2:54836 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104545s
	[INFO] 10.244.2.2:40126 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001854336s
	[INFO] 10.244.2.2:52894 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163896s
	[INFO] 10.244.2.2:35333 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230414s
	[INFO] 10.244.0.4:41974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152869s
	[INFO] 10.244.0.4:36380 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062783s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048517s
	[INFO] 10.244.2.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018024s
	[INFO] 10.244.2.2:38193 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125455s
	[INFO] 10.244.1.2:33651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271979s
	[INFO] 10.244.1.2:35705 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159131s
	[INFO] 10.244.0.4:48176 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111737s
	[INFO] 10.244.0.4:38598 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127464s
	[INFO] 10.244.0.4:32940 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000141046s
	[INFO] 10.244.2.2:43181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212895s
	[INFO] 10.244.2.2:43421 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090558s
	
	
	==> describe nodes <==
	Name:               ha-273199
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_53_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:53:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:42 +0000   Mon, 28 Oct 2024 11:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-273199
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c1c6593d854f8388a3b75213b790ab
	  System UUID:                c4c1c659-3d85-4f83-88a3-b75213b790ab
	  Boot ID:                    1bfb0ff9-0991-4c08-97cb-b1b218815106
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-7rnn9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m18s
	  kube-system                 coredns-7c65d6cfc9-hc26g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m18s
	  kube-system                 etcd-ha-273199                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m23s
	  kube-system                 kindnet-2gldl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-273199             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-273199    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-tr5vf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-273199             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-vip-ha-273199                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m30s (x7 over 6m30s)  kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m30s (x8 over 6m30s)  kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x8 over 6m30s)  kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m23s                  kubelet          Node ha-273199 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s                  kubelet          Node ha-273199 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s                  kubelet          Node ha-273199 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  NodeReady                6m5s                   kubelet          Node ha-273199 status is now: NodeReady
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-273199 event: Registered Node ha-273199 in Controller
	
	
	Name:               ha-273199-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_54_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:54:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:57:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:56:29 +0000   Mon, 28 Oct 2024 11:58:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-273199-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d185c9b1be043df924a5dc234d517bb
	  System UUID:                2d185c9b-1be0-43df-924a-5dc234d517bb
	  Boot ID:                    707068c3-7da2-4705-9622-6b089ce29c40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8tvkk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-273199-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m33s
	  kube-system                 kindnet-ts2mp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m35s
	  kube-system                 kube-apiserver-ha-273199-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-ha-273199-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-nrzn7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-scheduler-ha-273199-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-vip-ha-273199-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m35s (x8 over 5m35s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m35s (x8 over 5m35s)  kubelet          Node ha-273199-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s (x7 over 5m35s)  kubelet          Node ha-273199-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-273199-m02 event: Registered Node ha-273199-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-273199-m02 status is now: NodeNotReady
	
	
	Name:               ha-273199-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_55_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:55:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:55:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:56:41 +0000   Mon, 28 Oct 2024 11:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-273199-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d112805c85f46e58297ecf352114eb9
	  System UUID:                1d112805-c85f-46e5-8297-ecf352114eb9
	  Boot ID:                    07c61f8b-a2c4-4310-b7a1-41ac039bba9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g54mk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-273199-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-rz4mf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-273199-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-273199-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-9g4h7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-ha-273199-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-273199-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node ha-273199-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node ha-273199-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-273199-m03 event: Registered Node ha-273199-m03 in Controller
	
	
	Name:               ha-273199-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-273199-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=ha-273199
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_56_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-273199-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:59:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:56:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:57:12 +0000   Mon, 28 Oct 2024 11:57:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ha-273199-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43b84cefa5dd4131ade4071e67ae7a87
	  System UUID:                43b84cef-a5dd-4131-ade4-071e67ae7a87
	  Boot ID:                    bfbeda91-dd05-4597-adc6-b479c1c2dd66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bx2hn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-7pzm5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m21s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m21s)  kubelet          Node ha-273199-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m21s)  kubelet          Node ha-273199-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-273199-m04 event: Registered Node ha-273199-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-273199-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049625] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.737052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891479] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.789015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.644647] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.122482] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.184258] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.115821] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.235503] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.601274] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.514017] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.057056] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251877] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.071885] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.801233] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.354632] kauditd_printk_skb: 38 callbacks suppressed
	[Oct28 11:54] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3] <==
	{"level":"warn","ts":"2024-10-28T12:00:00.823409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:00.899543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:00.972121Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:00.979418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:00.982881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:00.992861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:00.999047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.001218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.007071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.010627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.013615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.018146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.023766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.031642Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.035460Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.038260Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.044197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.051156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.056940Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.060642Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.063074Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.066615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.072933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.080684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T12:00:01.099167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"c9f57bf3c0e40ace","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:00:01 up 6 min,  0 users,  load average: 0.37, 0.34, 0.18
	Linux ha-273199 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9] <==
	I1028 11:59:26.530655       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:36.531055       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:36.531126       1 main.go:300] handling current node
	I1028 11:59:36.531149       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:36.531155       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:36.531406       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:36.531425       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:36.531556       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:36.531571       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:46.530412       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:46.530590       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:46.531165       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:46.531265       1 main.go:300] handling current node
	I1028 11:59:46.531299       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:46.531355       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:46.531643       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:46.531670       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	I1028 11:59:56.531158       1 main.go:296] Handling node with IPs: map[192.168.39.29:{}]
	I1028 11:59:56.531235       1 main.go:323] Node ha-273199-m04 has CIDR [10.244.3.0/24] 
	I1028 11:59:56.531500       1 main.go:296] Handling node with IPs: map[192.168.39.208:{}]
	I1028 11:59:56.531591       1 main.go:300] handling current node
	I1028 11:59:56.531635       1 main.go:296] Handling node with IPs: map[192.168.39.225:{}]
	I1028 11:59:56.531654       1 main.go:323] Node ha-273199-m02 has CIDR [10.244.1.0/24] 
	I1028 11:59:56.531828       1 main.go:296] Handling node with IPs: map[192.168.39.14:{}]
	I1028 11:59:56.531853       1 main.go:323] Node ha-273199-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56] <==
	I1028 11:53:37.479954       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:53:38.366724       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:53:38.396043       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:53:38.413224       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:53:42.979540       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:53:43.083644       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:55:40.973661       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.973734       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.741µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1028 11:55:40.974882       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.976075       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1028 11:55:40.977370       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.890629ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1028 11:56:12.749438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33980: use of closed network connection
	E1028 11:56:12.923851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33996: use of closed network connection
	E1028 11:56:13.281780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34038: use of closed network connection
	E1028 11:56:13.456851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34054: use of closed network connection
	E1028 11:56:13.625829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34076: use of closed network connection
	E1028 11:56:13.792266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34090: use of closed network connection
	E1028 11:56:13.965533       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34100: use of closed network connection
	E1028 11:56:14.136211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34124: use of closed network connection
	E1028 11:56:14.414608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34162: use of closed network connection
	E1028 11:56:14.591367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34188: use of closed network connection
	E1028 11:56:14.760347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34200: use of closed network connection
	E1028 11:56:14.922486       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34206: use of closed network connection
	E1028 11:56:15.092625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34220: use of closed network connection
	E1028 11:56:15.260557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34244: use of closed network connection
	
	
	==> kube-controller-manager [07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df] <==
	I1028 11:56:41.255363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.287882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.504368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:41.718228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m03"
	I1028 11:56:41.866442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.227080       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-273199-m04"
	I1028 11:56:42.253788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:42.533477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199"
	I1028 11:56:43.703600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:43.733191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.386515       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:44.495725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:56:51.380862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:01.630379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:57:01.650243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:02.239477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:57:12.162277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m04"
	I1028 11:58:02.262145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.262722       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-273199-m04"
	I1028 11:58:02.289111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:02.371759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.617397ms"
	I1028 11:58:02.371873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.712µs"
	I1028 11:58:03.751638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	I1028 11:58:07.489074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-273199-m02"
	
	
	==> kube-proxy [82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:53:45.160274       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:53:45.173814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E1028 11:53:45.173942       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:53:45.205451       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:53:45.205509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:53:45.205540       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:53:45.207870       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:53:45.208259       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:53:45.208291       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:53:45.209606       1 config.go:328] "Starting node config controller"
	I1028 11:53:45.209665       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:53:45.210054       1 config.go:199] "Starting service config controller"
	I1028 11:53:45.210078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:53:45.210110       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:53:45.210127       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:53:45.310570       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:53:45.310626       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:53:45.310585       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c] <==
	I1028 11:53:39.113228       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:55:40.277591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.278684       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 164d41fa-0fff-4f4c-8f09-011e57fc1094(kube-system/kindnet-whfj9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-whfj9"
	E1028 11:55:40.278764       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-whfj9\": pod kindnet-whfj9 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-whfj9"
	I1028 11:55:40.278832       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-whfj9" node="ha-273199-m03"
	E1028 11:55:40.294817       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.294939       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 88c92727-3ef1-4b38-9df5-771fe9917f5e(kube-system/kube-proxy-qxpt8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qxpt8"
	E1028 11:55:40.294972       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qxpt8\": pod kube-proxy-qxpt8 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-qxpt8"
	I1028 11:55:40.295047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qxpt8" node="ha-273199-m03"
	E1028 11:55:40.307670       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.307788       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4899b8e5-73ce-487e-81ca-f833a1dc900b(kube-system/kube-proxy-9g4h7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9g4h7"
	E1028 11:55:40.307822       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9g4h7\": pod kube-proxy-9g4h7 is already assigned to node \"ha-273199-m03\"" pod="kube-system/kube-proxy-9g4h7"
	I1028 11:55:40.307855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9g4h7" node="ha-273199-m03"
	E1028 11:55:40.324371       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:40.324469       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6b2fd99-538e-49be-bda5-b0e1c9edb32c(kube-system/kindnet-4bn7m) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4bn7m"
	E1028 11:55:40.324505       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4bn7m\": pod kindnet-4bn7m is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-4bn7m"
	I1028 11:55:40.324540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4bn7m" node="ha-273199-m03"
	E1028 11:55:42.324511       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:55:42.324607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33ad0e92-e29c-4e54-8593-7cffd69fd439(kube-system/kindnet-rz4mf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rz4mf"
	E1028 11:55:42.324641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rz4mf\": pod kindnet-rz4mf is already assigned to node \"ha-273199-m03\"" pod="kube-system/kindnet-rz4mf"
	I1028 11:55:42.324700       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rz4mf" node="ha-273199-m03"
	E1028 11:56:08.295366       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	E1028 11:56:08.295536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7e89846f-39f0-42a4-b343-0ae004376bc7(default/busybox-7dff88458-fnvwg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fnvwg"
	E1028 11:56:08.295580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fnvwg\": pod busybox-7dff88458-fnvwg is already assigned to node \"ha-273199\"" pod="default/busybox-7dff88458-fnvwg"
	I1028 11:56:08.295605       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fnvwg" node="ha-273199"
	
	
	==> kubelet <==
	Oct 28 11:58:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:58:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351743    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:38 ha-273199 kubelet[1304]: E1028 11:58:38.351767    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116718351386721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353760    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:48 ha-273199 kubelet[1304]: E1028 11:58:48.353814    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116728353377311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356841    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:58:58 ha-273199 kubelet[1304]: E1028 11:58:58.356866    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116738354862916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358886    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:08 ha-273199 kubelet[1304]: E1028 11:59:08.358944    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116748358638626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.361731    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:18 ha-273199 kubelet[1304]: E1028 11:59:18.362240    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116758361155913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363560    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:28 ha-273199 kubelet[1304]: E1028 11:59:28.363977    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116768363170991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.290570    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:59:38 ha-273199 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:59:38 ha-273199 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366212    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:38 ha-273199 kubelet[1304]: E1028 11:59:38.366235    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116778365874189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:48 ha-273199 kubelet[1304]: E1028 11:59:48.367653    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116788367307757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:48 ha-273199 kubelet[1304]: E1028 11:59:48.367685    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116788367307757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:58 ha-273199 kubelet[1304]: E1028 11:59:58.368849    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116798368526094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:59:58 ha-273199 kubelet[1304]: E1028 11:59:58.369186    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116798368526094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-273199 -n ha-273199
helpers_test.go:261: (dbg) Run:  kubectl --context ha-273199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.45s)
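
Note on the kube-scheduler entries above: each "Plugin Failed ... Operation cannot be fulfilled on pods/binding" line is an ordinary 409 Conflict from the API server, returned when the DefaultBinder's Bind call races a binding that has already succeeded for the same pod; the scheduler then logs "Pod has been assigned to node. Abort adding it back to queue." and drops the retry, so the pods in question did end up assigned. A minimal client-go sketch of issuing and recognizing that kind of conflict (illustrative only; the package name, helper function, and clientset wiring are assumptions, not code from minikube or the scheduler):

    package binddebug

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bindOnce issues roughly the same Binding call the DefaultBinder plugin makes
    // and treats a 409 Conflict ("Operation cannot be fulfilled on pods/binding ...")
    // as "another binder already assigned this pod", which is not worth retrying.
    func bindOnce(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
        binding := &corev1.Binding{
            ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: ns},
            Target:     corev1.ObjectReference{Kind: "Node", Name: node},
        }
        err := cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
        if apierrors.IsConflict(err) {
            // Someone else already bound the pod; nothing to retry.
            log.Printf("pod %s/%s already assigned, skipping retry: %v", ns, pod, err)
            return nil
        }
        return err
    }
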

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-273199 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-273199 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-273199 -v=7 --alsologtostderr: exit status 82 (2m1.763686192s)

                                                
                                                
-- stdout --
	* Stopping node "ha-273199-m04"  ...
	* Stopping node "ha-273199-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:00:02.136091  100404 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:00:02.136238  100404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:00:02.136249  100404 out.go:358] Setting ErrFile to fd 2...
	I1028 12:00:02.136253  100404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:00:02.136420  100404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:00:02.136626  100404 out.go:352] Setting JSON to false
	I1028 12:00:02.136716  100404 mustload.go:65] Loading cluster: ha-273199
	I1028 12:00:02.137116  100404 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:00:02.137201  100404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 12:00:02.137377  100404 mustload.go:65] Loading cluster: ha-273199
	I1028 12:00:02.137508  100404 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:00:02.137532  100404 stop.go:39] StopHost: ha-273199-m04
	I1028 12:00:02.137898  100404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:00:02.137950  100404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:00:02.153313  100404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I1028 12:00:02.153830  100404 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:00:02.154384  100404 main.go:141] libmachine: Using API Version  1
	I1028 12:00:02.154410  100404 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:00:02.154788  100404 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:00:02.157251  100404 out.go:177] * Stopping node "ha-273199-m04"  ...
	I1028 12:00:02.158889  100404 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:00:02.158915  100404 main.go:141] libmachine: (ha-273199-m04) Calling .DriverName
	I1028 12:00:02.159160  100404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:00:02.159193  100404 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHHostname
	I1028 12:00:02.161756  100404 main.go:141] libmachine: (ha-273199-m04) DBG | domain ha-273199-m04 has defined MAC address 52:54:00:07:1d:3b in network mk-ha-273199
	I1028 12:00:02.162100  100404 main.go:141] libmachine: (ha-273199-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:1d:3b", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:56:29 +0000 UTC Type:0 Mac:52:54:00:07:1d:3b Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-273199-m04 Clientid:01:52:54:00:07:1d:3b}
	I1028 12:00:02.162126  100404 main.go:141] libmachine: (ha-273199-m04) DBG | domain ha-273199-m04 has defined IP address 192.168.39.29 and MAC address 52:54:00:07:1d:3b in network mk-ha-273199
	I1028 12:00:02.162268  100404 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHPort
	I1028 12:00:02.162449  100404 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHKeyPath
	I1028 12:00:02.162587  100404 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHUsername
	I1028 12:00:02.162744  100404 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m04/id_rsa Username:docker}
	I1028 12:00:02.248462  100404 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:00:02.302875  100404 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:00:02.356770  100404 main.go:141] libmachine: Stopping "ha-273199-m04"...
	I1028 12:00:02.356832  100404 main.go:141] libmachine: (ha-273199-m04) Calling .GetState
	I1028 12:00:02.358391  100404 main.go:141] libmachine: (ha-273199-m04) Calling .Stop
	I1028 12:00:02.362204  100404 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 0/120
	I1028 12:00:03.425880  100404 main.go:141] libmachine: (ha-273199-m04) Calling .GetState
	I1028 12:00:03.427242  100404 main.go:141] libmachine: Machine "ha-273199-m04" was stopped.
	I1028 12:00:03.427260  100404 stop.go:75] duration metric: took 1.26837432s to stop
	I1028 12:00:03.427297  100404 stop.go:39] StopHost: ha-273199-m03
	I1028 12:00:03.427571  100404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:00:03.427617  100404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:00:03.443176  100404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34095
	I1028 12:00:03.443624  100404 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:00:03.444189  100404 main.go:141] libmachine: Using API Version  1
	I1028 12:00:03.444245  100404 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:00:03.444642  100404 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:00:03.446652  100404 out.go:177] * Stopping node "ha-273199-m03"  ...
	I1028 12:00:03.447944  100404 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:00:03.447977  100404 main.go:141] libmachine: (ha-273199-m03) Calling .DriverName
	I1028 12:00:03.448197  100404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:00:03.448222  100404 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHHostname
	I1028 12:00:03.451238  100404 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 12:00:03.451831  100404 main.go:141] libmachine: (ha-273199-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:1d:e9", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:06 +0000 UTC Type:0 Mac:52:54:00:46:1d:e9 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-273199-m03 Clientid:01:52:54:00:46:1d:e9}
	I1028 12:00:03.451858  100404 main.go:141] libmachine: (ha-273199-m03) DBG | domain ha-273199-m03 has defined IP address 192.168.39.14 and MAC address 52:54:00:46:1d:e9 in network mk-ha-273199
	I1028 12:00:03.451987  100404 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHPort
	I1028 12:00:03.452191  100404 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHKeyPath
	I1028 12:00:03.452331  100404 main.go:141] libmachine: (ha-273199-m03) Calling .GetSSHUsername
	I1028 12:00:03.452448  100404 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m03/id_rsa Username:docker}
	I1028 12:00:03.544675  100404 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:00:03.597085  100404 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:00:03.651331  100404 main.go:141] libmachine: Stopping "ha-273199-m03"...
	I1028 12:00:03.651359  100404 main.go:141] libmachine: (ha-273199-m03) Calling .GetState
	I1028 12:00:03.652805  100404 main.go:141] libmachine: (ha-273199-m03) Calling .Stop
	I1028 12:00:03.656643  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 0/120
	I1028 12:00:04.659159  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 1/120
	I1028 12:00:05.661275  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 2/120
	I1028 12:00:06.662643  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 3/120
	I1028 12:00:07.664639  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 4/120
	I1028 12:00:08.666839  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 5/120
	I1028 12:00:09.668516  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 6/120
	I1028 12:00:10.670138  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 7/120
	I1028 12:00:11.672346  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 8/120
	I1028 12:00:12.673996  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 9/120
	I1028 12:00:13.676123  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 10/120
	I1028 12:00:14.678584  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 11/120
	I1028 12:00:15.680424  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 12/120
	I1028 12:00:16.682173  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 13/120
	I1028 12:00:17.683939  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 14/120
	I1028 12:00:18.685937  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 15/120
	I1028 12:00:19.687578  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 16/120
	I1028 12:00:20.689122  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 17/120
	I1028 12:00:21.690694  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 18/120
	I1028 12:00:22.692183  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 19/120
	I1028 12:00:23.694164  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 20/120
	I1028 12:00:24.695701  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 21/120
	I1028 12:00:25.697425  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 22/120
	I1028 12:00:26.699011  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 23/120
	I1028 12:00:27.700574  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 24/120
	I1028 12:00:28.702067  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 25/120
	I1028 12:00:29.703513  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 26/120
	I1028 12:00:30.705025  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 27/120
	I1028 12:00:31.706779  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 28/120
	I1028 12:00:32.708049  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 29/120
	I1028 12:00:33.709899  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 30/120
	I1028 12:00:34.712248  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 31/120
	I1028 12:00:35.713778  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 32/120
	I1028 12:00:36.715324  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 33/120
	I1028 12:00:37.716716  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 34/120
	I1028 12:00:38.718525  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 35/120
	I1028 12:00:39.719781  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 36/120
	I1028 12:00:40.721233  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 37/120
	I1028 12:00:41.722462  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 38/120
	I1028 12:00:42.723975  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 39/120
	I1028 12:00:43.725929  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 40/120
	I1028 12:00:44.727171  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 41/120
	I1028 12:00:45.728823  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 42/120
	I1028 12:00:46.730115  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 43/120
	I1028 12:00:47.731473  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 44/120
	I1028 12:00:48.733679  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 45/120
	I1028 12:00:49.735426  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 46/120
	I1028 12:00:50.736785  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 47/120
	I1028 12:00:51.738002  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 48/120
	I1028 12:00:52.739433  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 49/120
	I1028 12:00:53.741037  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 50/120
	I1028 12:00:54.742223  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 51/120
	I1028 12:00:55.743524  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 52/120
	I1028 12:00:56.745468  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 53/120
	I1028 12:00:57.746973  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 54/120
	I1028 12:00:58.748405  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 55/120
	I1028 12:00:59.749606  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 56/120
	I1028 12:01:00.750953  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 57/120
	I1028 12:01:01.752469  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 58/120
	I1028 12:01:02.753854  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 59/120
	I1028 12:01:03.755569  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 60/120
	I1028 12:01:04.757010  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 61/120
	I1028 12:01:05.758611  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 62/120
	I1028 12:01:06.760085  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 63/120
	I1028 12:01:07.761327  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 64/120
	I1028 12:01:08.762639  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 65/120
	I1028 12:01:09.763984  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 66/120
	I1028 12:01:10.766366  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 67/120
	I1028 12:01:11.767715  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 68/120
	I1028 12:01:12.768957  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 69/120
	I1028 12:01:13.770651  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 70/120
	I1028 12:01:14.771792  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 71/120
	I1028 12:01:15.773158  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 72/120
	I1028 12:01:16.774402  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 73/120
	I1028 12:01:17.775915  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 74/120
	I1028 12:01:18.777465  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 75/120
	I1028 12:01:19.778700  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 76/120
	I1028 12:01:20.780075  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 77/120
	I1028 12:01:21.782197  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 78/120
	I1028 12:01:22.784277  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 79/120
	I1028 12:01:23.786454  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 80/120
	I1028 12:01:24.788781  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 81/120
	I1028 12:01:25.790014  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 82/120
	I1028 12:01:26.791363  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 83/120
	I1028 12:01:27.792663  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 84/120
	I1028 12:01:28.794447  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 85/120
	I1028 12:01:29.795954  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 86/120
	I1028 12:01:30.798023  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 87/120
	I1028 12:01:31.799395  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 88/120
	I1028 12:01:32.800722  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 89/120
	I1028 12:01:33.802531  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 90/120
	I1028 12:01:34.803892  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 91/120
	I1028 12:01:35.805256  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 92/120
	I1028 12:01:36.806436  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 93/120
	I1028 12:01:37.807769  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 94/120
	I1028 12:01:38.809495  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 95/120
	I1028 12:01:39.810799  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 96/120
	I1028 12:01:40.812130  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 97/120
	I1028 12:01:41.814100  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 98/120
	I1028 12:01:42.815575  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 99/120
	I1028 12:01:43.817039  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 100/120
	I1028 12:01:44.818420  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 101/120
	I1028 12:01:45.819788  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 102/120
	I1028 12:01:46.820998  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 103/120
	I1028 12:01:47.822338  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 104/120
	I1028 12:01:48.824092  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 105/120
	I1028 12:01:49.825218  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 106/120
	I1028 12:01:50.826723  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 107/120
	I1028 12:01:51.827889  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 108/120
	I1028 12:01:52.830288  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 109/120
	I1028 12:01:53.831764  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 110/120
	I1028 12:01:54.832876  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 111/120
	I1028 12:01:55.834183  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 112/120
	I1028 12:01:56.835393  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 113/120
	I1028 12:01:57.836688  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 114/120
	I1028 12:01:58.838831  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 115/120
	I1028 12:01:59.840076  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 116/120
	I1028 12:02:00.841498  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 117/120
	I1028 12:02:01.842729  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 118/120
	I1028 12:02:02.844327  100404 main.go:141] libmachine: (ha-273199-m03) Waiting for machine to stop 119/120
	I1028 12:02:03.845433  100404 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 12:02:03.845508  100404 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 12:02:03.847480  100404 out.go:201] 
	W1028 12:02:03.848728  100404 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 12:02:03.848743  100404 out.go:270] * 
	* 
	W1028 12:02:03.851984  100404 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:02:03.853390  100404 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-273199 -v=7 --alsologtostderr" : exit status 82
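
The "Waiting for machine to stop 0/120" ... "119/120" lines above are a fixed-budget poll: the stop path checks the node's state through the kvm2 driver roughly once per second, gives up after 120 attempts, and surfaces the last observed state in the "unable to stop vm, current state \"Running\"" error, which minikube then reports as GUEST_STOP_TIMEOUT (exit status 82). A minimal sketch of that polling shape (illustrative Go under those assumptions, not minikube's actual stop implementation):

    package stopwait

    import (
        "fmt"
        "log"
        "time"
    )

    // State stands in for the driver's reported machine state, e.g. "Running" or "Stopped".
    type State string

    // waitForStop polls getState up to `attempts` times, one poll per `interval`,
    // and returns an error carrying the last observed state if the machine never stops.
    func waitForStop(getState func() (State, error), attempts int, interval time.Duration) error {
        state := State("Unknown")
        for i := 0; i < attempts; i++ {
            log.Printf("Waiting for machine to stop %d/%d", i, attempts)
            s, err := getState()
            if err != nil {
                return err
            }
            if s == "Stopped" {
                return nil
            }
            state = s
            time.Sleep(interval)
        }
        return fmt.Errorf("unable to stop vm, current state %q", state)
    }

In the ha-273199-m03 case the loop exhausts all 120 attempts with the state still "Running", so the stop step fails even though ha-273199-m04 stopped on its first check, and the test then restarts the cluster anyway.
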
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-273199 --wait=true -v=7 --alsologtostderr
E1028 12:02:13.450880   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:02:41.150687   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:04:20.376142   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:05:43.441373   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-273199 --wait=true -v=7 --alsologtostderr: (4m1.48682692s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-273199
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-273199 -n ha-273199
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 logs -n 25: (1.934403124s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m04 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp testdata/cp-test.txt                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m04_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03:/home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m03 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-273199 node stop m02 -v=7                                                     | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-273199 node start m02 -v=7                                                    | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-273199 -v=7                                                           | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-273199 -v=7                                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-273199 --wait=true -v=7                                                    | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC | 28 Oct 24 12:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-273199                                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:06 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:02:03
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:02:03.903611  100870 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:02:03.903785  100870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:03.903796  100870 out.go:358] Setting ErrFile to fd 2...
	I1028 12:02:03.903802  100870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:03.903996  100870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:02:03.904558  100870 out.go:352] Setting JSON to false
	I1028 12:02:03.905456  100870 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6274,"bootTime":1730110650,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:02:03.905566  100870 start.go:139] virtualization: kvm guest
	I1028 12:02:03.908809  100870 out.go:177] * [ha-273199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:02:03.910015  100870 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:02:03.910035  100870 notify.go:220] Checking for updates...
	I1028 12:02:03.912526  100870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:02:03.913660  100870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:02:03.914810  100870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:02:03.915925  100870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:02:03.916953  100870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:02:03.918563  100870 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:03.918686  100870 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:02:03.919137  100870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:03.919196  100870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:03.935019  100870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1028 12:02:03.935414  100870 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:03.936095  100870 main.go:141] libmachine: Using API Version  1
	I1028 12:02:03.936114  100870 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:03.936641  100870 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:03.936847  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:02:03.970864  100870 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:02:03.971854  100870 start.go:297] selected driver: kvm2
	I1028 12:02:03.971870  100870 start.go:901] validating driver "kvm2" against &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:03.972006  100870 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:02:03.972302  100870 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:03.972369  100870 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:02:03.987583  100870 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:02:03.988327  100870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:02:03.988360  100870 cni.go:84] Creating CNI manager for ""
	I1028 12:02:03.988418  100870 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 12:02:03.988475  100870 start.go:340] cluster config:
	{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:03.988607  100870 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:03.991064  100870 out.go:177] * Starting "ha-273199" primary control-plane node in "ha-273199" cluster
	I1028 12:02:03.992158  100870 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:02:03.992190  100870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:02:03.992197  100870 cache.go:56] Caching tarball of preloaded images
	I1028 12:02:03.992270  100870 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:02:03.992284  100870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:02:03.992410  100870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 12:02:03.992612  100870 start.go:360] acquireMachinesLock for ha-273199: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:02:03.992670  100870 start.go:364] duration metric: took 37.357µs to acquireMachinesLock for "ha-273199"
	I1028 12:02:03.992691  100870 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:02:03.992701  100870 fix.go:54] fixHost starting: 
	I1028 12:02:03.993020  100870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:03.993075  100870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:04.008047  100870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I1028 12:02:04.008464  100870 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:04.008967  100870 main.go:141] libmachine: Using API Version  1
	I1028 12:02:04.008990  100870 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:04.009320  100870 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:04.009477  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:02:04.009618  100870 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 12:02:04.010988  100870 fix.go:112] recreateIfNeeded on ha-273199: state=Running err=<nil>
	W1028 12:02:04.011005  100870 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:02:04.012782  100870 out.go:177] * Updating the running kvm2 "ha-273199" VM ...
	I1028 12:02:04.014069  100870 machine.go:93] provisionDockerMachine start ...
	I1028 12:02:04.014088  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:02:04.014275  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.016594  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.017165  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.017185  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.017352  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.017505  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.017625  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.017783  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.017917  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.018095  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.018104  100870 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:02:04.136386  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 12:02:04.136414  100870 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 12:02:04.136651  100870 buildroot.go:166] provisioning hostname "ha-273199"
	I1028 12:02:04.136675  100870 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 12:02:04.136838  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.139912  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.140326  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.140344  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.140515  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.140685  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.140835  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.140984  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.141140  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.141350  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.141363  100870 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199 && echo "ha-273199" | sudo tee /etc/hostname
	I1028 12:02:04.266481  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 12:02:04.266533  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.269395  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.269827  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.269857  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.269989  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.270232  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.270391  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.270485  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.270679  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.270856  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.270870  100870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:02:04.384725  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
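The shell fragment above is idempotent: it only touches /etc/hosts when the node name is absent, either rewriting an existing 127.0.1.1 line or appending one. A quick manual spot-check from the host, as a sketch assuming the same profile name used throughout this log:

    # Sketch only: confirm the 127.0.1.1 mapping the provisioner maintains.
    minikube -p ha-273199 ssh "grep -n 'ha-273199' /etc/hosts"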
	I1028 12:02:04.384761  100870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:02:04.384787  100870 buildroot.go:174] setting up certificates
	I1028 12:02:04.384804  100870 provision.go:84] configureAuth start
	I1028 12:02:04.384821  100870 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 12:02:04.385115  100870 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 12:02:04.387947  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.388377  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.388399  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.388568  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.390735  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.391127  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.391153  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.391272  100870 provision.go:143] copyHostCerts
	I1028 12:02:04.391308  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:02:04.391379  100870 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:02:04.391398  100870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:02:04.391491  100870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:02:04.391662  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:02:04.391708  100870 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:02:04.391720  100870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:02:04.391772  100870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:02:04.391867  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:02:04.391895  100870 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:02:04.391904  100870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:02:04.391944  100870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:02:04.392036  100870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199 san=[127.0.0.1 192.168.39.208 ha-273199 localhost minikube]
	I1028 12:02:04.529305  100870 provision.go:177] copyRemoteCerts
	I1028 12:02:04.529387  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:02:04.529422  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.532448  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.532918  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.532950  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.533121  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.533304  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.533465  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.533578  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:02:04.618283  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 12:02:04.618357  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:02:04.641492  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 12:02:04.641570  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 12:02:04.663242  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 12:02:04.663298  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
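The three transfers above place the machine CA and the freshly generated server certificate and key into /etc/docker on the guest. A minimal sketch for verifying the chain by hand, using the remote paths shown in the log:

    # Sketch: the copied server certificate should verify against the copied CA.
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem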
	I1028 12:02:04.684746  100870 provision.go:87] duration metric: took 299.924974ms to configureAuth
	I1028 12:02:04.684773  100870 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:02:04.685029  100870 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:04.685101  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.688209  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.688605  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.688626  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.688845  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.689025  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.689183  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.689309  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.689441  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.689623  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.689638  100870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:03:35.537639  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:03:35.537672  100870 machine.go:96] duration metric: took 1m31.52358774s to provisionDockerMachine
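Most of that 1m31s is spent in the single SSH command above that writes /etc/sysconfig/crio.minikube and then runs systemctl restart crio (issued at 12:02:04, returning at 12:03:35). A sketch of the obvious follow-up checks on the node, using the same file and service names:

    # Sketch: inspect the drop-in that was written and how CRI-O came back up.
    cat /etc/sysconfig/crio.minikube
    systemctl show crio --property=ExecMainStartTimestamp,ActiveState --no-pager
    journalctl -u crio -n 20 --no-pager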
	I1028 12:03:35.537686  100870 start.go:293] postStartSetup for "ha-273199" (driver="kvm2")
	I1028 12:03:35.537697  100870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:03:35.537715  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.538113  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:03:35.538149  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.541355  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.541809  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.541836  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.541968  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.542173  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.542327  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.542439  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:03:35.631492  100870 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:03:35.635456  100870 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:03:35.635475  100870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:03:35.635535  100870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:03:35.635616  100870 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:03:35.635654  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 12:03:35.635754  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:03:35.645603  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:03:35.668954  100870 start.go:296] duration metric: took 131.25574ms for postStartSetup
	I1028 12:03:35.668999  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.669284  100870 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1028 12:03:35.669316  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.671833  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.672295  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.672314  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.672477  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.672652  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.672793  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.672921  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	W1028 12:03:35.758129  100870 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1028 12:03:35.758151  100870 fix.go:56] duration metric: took 1m31.765453008s for fixHost
	I1028 12:03:35.758176  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.760659  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.760992  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.761017  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.761154  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.761355  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.761537  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.761681  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.761829  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:35.762007  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:03:35.762017  100870 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:03:35.875795  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117015.848180491
	
	I1028 12:03:35.875816  100870 fix.go:216] guest clock: 1730117015.848180491
	I1028 12:03:35.875822  100870 fix.go:229] Guest: 2024-10-28 12:03:35.848180491 +0000 UTC Remote: 2024-10-28 12:03:35.758160266 +0000 UTC m=+91.893889761 (delta=90.020225ms)
	I1028 12:03:35.875842  100870 fix.go:200] guest clock delta is within tolerance: 90.020225ms
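The fixer compares the guest's date +%s.%N output with the host-side timestamp and, since the 90.020225ms delta is within tolerance, leaves the clock alone. Reproducing the arithmetic from the two values logged above:

    # Values copied from the log lines above; prints the ~0.090s delta reported.
    echo "1730117015.848180491 - 1730117015.758160266" | bc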
	I1028 12:03:35.875848  100870 start.go:83] releasing machines lock for "ha-273199", held for 1m31.883167163s
	I1028 12:03:35.875878  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.876092  100870 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 12:03:35.878554  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.878906  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.878936  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.879054  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.879516  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.879708  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.879808  100870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:03:35.879847  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.879914  100870 ssh_runner.go:195] Run: cat /version.json
	I1028 12:03:35.879938  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.882467  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.882609  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.882837  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.882867  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.882987  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.883130  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.883149  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.883180  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.883308  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.883317  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.883480  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.883496  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:03:35.883609  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.883757  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:03:35.964016  100870 ssh_runner.go:195] Run: systemctl --version
	I1028 12:03:35.983445  100870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:03:36.146255  100870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:03:36.151549  100870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:03:36.151612  100870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:03:36.160010  100870 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 12:03:36.160033  100870 start.go:495] detecting cgroup driver to use...
	I1028 12:03:36.160120  100870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:03:36.175719  100870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:03:36.189003  100870 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:03:36.189044  100870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:03:36.201055  100870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:03:36.212922  100870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:03:36.363853  100870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:03:36.497270  100870 docker.go:233] disabling docker service ...
	I1028 12:03:36.497342  100870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:03:36.512883  100870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:03:36.525570  100870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:03:36.665771  100870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:03:36.807915  100870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:03:36.821298  100870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:03:36.838231  100870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:03:36.838314  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.847790  100870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:03:36.847878  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.857561  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.867064  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.876683  100870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:03:36.887009  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.896857  100870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.907254  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.916861  100870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:03:36.925444  100870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:03:36.933997  100870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:03:37.066208  100870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:03:38.470285  100870 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.404034896s)
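The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, pod-scoped conmon cgroup, and the unprivileged-port sysctl) before the daemon-reload and restart. A sketch for reviewing the net effect on the node, using only commands that already appear elsewhere in this log:

    # Sketch: show the drop-in as edited, and the values CRI-O actually resolved.
    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls'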
	I1028 12:03:38.470321  100870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:03:38.470380  100870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:03:38.475330  100870 start.go:563] Will wait 60s for crictl version
	I1028 12:03:38.475395  100870 ssh_runner.go:195] Run: which crictl
	I1028 12:03:38.478798  100870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:03:38.519102  100870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:03:38.519200  100870 ssh_runner.go:195] Run: crio --version
	I1028 12:03:38.546233  100870 ssh_runner.go:195] Run: crio --version
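At this point the runtime has been confirmed as CRI-O 1.29.1 speaking CRI v1 over /var/run/crio/crio.sock. Equivalent manual probes against the same socket, as a sketch mirroring the crictl calls minikube issues here and further below:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json | head -c 400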
	I1028 12:03:38.575873  100870 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:03:38.577051  100870 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 12:03:38.579762  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:38.580128  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:38.580156  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:38.580449  100870 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:03:38.584643  100870 kubeadm.go:883] updating cluster {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:03:38.584776  100870 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:03:38.584817  100870 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:03:38.627991  100870 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:03:38.628025  100870 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:03:38.628083  100870 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:03:38.661185  100870 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:03:38.661210  100870 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:03:38.661222  100870 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1028 12:03:38.661326  100870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:03:38.661393  100870 ssh_runner.go:195] Run: crio config
	I1028 12:03:38.717907  100870 cni.go:84] Creating CNI manager for ""
	I1028 12:03:38.717932  100870 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 12:03:38.717945  100870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:03:38.717974  100870 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-273199 NodeName:ha-273199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:03:38.718122  100870 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-273199"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:03:38.718147  100870 kube-vip.go:115] generating kube-vip config ...
	I1028 12:03:38.718200  100870 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 12:03:38.729484  100870 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 12:03:38.729594  100870 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
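The static pod above runs kube-vip with ARP-based leader election on eth0 and announces the HA control-plane VIP 192.168.39.254 on port 8443, with lb_enable additionally load-balancing apiserver traffic across the control planes. Once a leader holds the plndr-cp-lock lease, a sketch for confirming the VIP is live from a node, using the address and port from the manifest:

    # Sketch: the VIP should be attached to eth0 on the current leader...
    ip addr show dev eth0 | grep 192.168.39.254
    # ...and the apiserver should answer through it (the /healthz and /version
    # endpoints are typically readable without credentials on kubeadm clusters).
    curl -sk https://192.168.39.254:8443/healthz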
	I1028 12:03:38.729653  100870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:03:38.738275  100870 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:03:38.738330  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 12:03:38.746621  100870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 12:03:38.761336  100870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:03:38.775885  100870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 12:03:38.791466  100870 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
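All four generated files are now in place on the node: the kubelet drop-in and unit, the kubeadm configuration staged as /var/tmp/minikube/kubeadm.yaml.new, and the kube-vip static pod manifest. A sketch for sanity-checking the staged kubeadm file before it is used, assuming the kubeadm binary sits alongside the kubelet in the binaries directory shown above:

    # Sketch: structural validation of the staged config with kubeadm's own validator.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new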
	I1028 12:03:38.806550  100870 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 12:03:38.810171  100870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:03:38.942901  100870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:03:38.956371  100870 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.208
	I1028 12:03:38.956397  100870 certs.go:194] generating shared ca certs ...
	I1028 12:03:38.956413  100870 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:38.956556  100870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:03:38.956599  100870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:03:38.956609  100870 certs.go:256] generating profile certs ...
	I1028 12:03:38.956691  100870 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 12:03:38.956717  100870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e
	I1028 12:03:38.956735  100870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.14 192.168.39.254]
	I1028 12:03:39.052446  100870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e ...
	I1028 12:03:39.052475  100870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e: {Name:mkf7d063e306797dc9e6e5ad6dc9bcb3b72bf806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:39.052639  100870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e ...
	I1028 12:03:39.052651  100870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e: {Name:mk4019b5234c52eba7352f0210ac3f3e5c064235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:39.052717  100870 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 12:03:39.052856  100870 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
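The regenerated apiserver certificate is issued for the SAN set logged above (the kubernetes service IP 10.96.0.1, loopback, the three control-plane node IPs, and the 192.168.39.254 VIP). A one-line sketch to confirm what actually got embedded, using the profile path from the log and a reasonably recent openssl:

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt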
	I1028 12:03:39.052983  100870 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 12:03:39.053000  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 12:03:39.053014  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 12:03:39.053027  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 12:03:39.053039  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 12:03:39.053050  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 12:03:39.053062  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 12:03:39.053074  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 12:03:39.053086  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 12:03:39.053138  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:03:39.053175  100870 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:03:39.053184  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:03:39.053205  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:03:39.053226  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:03:39.053249  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:03:39.053288  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:03:39.053313  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.053342  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.053368  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.053949  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:03:39.076687  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:03:39.097250  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:03:39.118090  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:03:39.139412  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:03:39.160112  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:03:39.181032  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:03:39.201432  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:03:39.222134  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:03:39.243775  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:03:39.264965  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:03:39.285580  100870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:03:39.300958  100870 ssh_runner.go:195] Run: openssl version
	I1028 12:03:39.306254  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:03:39.315512  100870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.319526  100870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.319571  100870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.324647  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:03:39.332921  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:03:39.342691  100870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.346507  100870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.346543  100870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.351391  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:03:39.359451  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:03:39.368795  100870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.372599  100870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.372640  100870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.377688  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:03:39.385863  100870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:03:39.389982  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:03:39.395020  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:03:39.399968  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:03:39.404924  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:03:39.410342  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:03:39.415548  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:03:39.420736  100870 kubeadm.go:392] StartCluster: {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:03:39.420842  100870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:03:39.420914  100870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:03:39.456803  100870 cri.go:89] found id: "0616197f0731a6c7aecad2af81db7abe3663092ac2cc43248fae06a2b1dbc5bd"
	I1028 12:03:39.456823  100870 cri.go:89] found id: "c7b36f903237978f5a1eda292661332688e1877341a87673b1ec023014dc4c7f"
	I1028 12:03:39.456827  100870 cri.go:89] found id: "974fb0419fbfb90af409e1c6b810ef3505544800969adc0bf73405ee06b57c1c"
	I1028 12:03:39.456830  100870 cri.go:89] found id: "fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce"
	I1028 12:03:39.456833  100870 cri.go:89] found id: "74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d"
	I1028 12:03:39.456836  100870 cri.go:89] found id: "72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3"
	I1028 12:03:39.456839  100870 cri.go:89] found id: "e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9"
	I1028 12:03:39.456841  100870 cri.go:89] found id: "82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a"
	I1028 12:03:39.456844  100870 cri.go:89] found id: "39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447"
	I1028 12:03:39.456848  100870 cri.go:89] found id: "8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56"
	I1028 12:03:39.456851  100870 cri.go:89] found id: "07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df"
	I1028 12:03:39.456854  100870 cri.go:89] found id: "6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c"
	I1028 12:03:39.456869  100870 cri.go:89] found id: "ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3"
	I1028 12:03:39.456875  100870 cri.go:89] found id: ""
	I1028 12:03:39.456910  100870 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
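The stdout dump above shows how minikube wires each copied CA file into the node's trust store: it computes the certificate's OpenSSL subject hash (openssl x509 -hash -noout -in ...) and then symlinks /etc/ssl/certs/<hash>.0 at the PEM file, e.g. 849652.pem -> 3ec20f2e.0. Below is a minimal Go sketch of that convention, not minikube's actual code; it assumes openssl is on PATH and uses an illustrative certificate path.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the two steps in the log: compute the OpenSSL subject
// hash of a certificate, then point /etc/ssl/certs/<hash>.0 at it so that
// OpenSSL-based clients can find the CA by hash.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like "ln -fs": replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative path only; the test copies its own PEM files into place first.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}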
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-273199 -n ha-273199
helpers_test.go:261: (dbg) Run:  kubectl --context ha-273199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.87s)
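Just before the dump above ends, minikube enumerates the kube-system containers with "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" and logs each returned ID (the cri.go:89 "found id" lines). The following is a rough Go sketch of that listing step, assuming crictl is installed on the node; the function name is illustrative and not from the minikube sources.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl invocation shown in the log
// and returns the container IDs it prints, one per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}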

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 stop -v=7 --alsologtostderr
E1028 12:07:13.449674   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-273199 stop -v=7 --alsologtostderr: exit status 82 (2m0.458517329s)

                                                
                                                
-- stdout --
	* Stopping node "ha-273199-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:06:25.009545  102645 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:06:25.009987  102645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:06:25.010039  102645 out.go:358] Setting ErrFile to fd 2...
	I1028 12:06:25.010057  102645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:06:25.010481  102645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:06:25.011041  102645 out.go:352] Setting JSON to false
	I1028 12:06:25.011118  102645 mustload.go:65] Loading cluster: ha-273199
	I1028 12:06:25.011518  102645 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:06:25.011648  102645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 12:06:25.011838  102645 mustload.go:65] Loading cluster: ha-273199
	I1028 12:06:25.011979  102645 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:06:25.012012  102645 stop.go:39] StopHost: ha-273199-m04
	I1028 12:06:25.012395  102645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:06:25.012469  102645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:06:25.027069  102645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35921
	I1028 12:06:25.027609  102645 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:06:25.028177  102645 main.go:141] libmachine: Using API Version  1
	I1028 12:06:25.028197  102645 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:06:25.028607  102645 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:06:25.030946  102645 out.go:177] * Stopping node "ha-273199-m04"  ...
	I1028 12:06:25.032441  102645 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:06:25.032477  102645 main.go:141] libmachine: (ha-273199-m04) Calling .DriverName
	I1028 12:06:25.032687  102645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:06:25.032710  102645 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHHostname
	I1028 12:06:25.035274  102645 main.go:141] libmachine: (ha-273199-m04) DBG | domain ha-273199-m04 has defined MAC address 52:54:00:07:1d:3b in network mk-ha-273199
	I1028 12:06:25.035709  102645 main.go:141] libmachine: (ha-273199-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:1d:3b", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:07:1d:3b Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-273199-m04 Clientid:01:52:54:00:07:1d:3b}
	I1028 12:06:25.035737  102645 main.go:141] libmachine: (ha-273199-m04) DBG | domain ha-273199-m04 has defined IP address 192.168.39.29 and MAC address 52:54:00:07:1d:3b in network mk-ha-273199
	I1028 12:06:25.035869  102645 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHPort
	I1028 12:06:25.036031  102645 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHKeyPath
	I1028 12:06:25.036173  102645 main.go:141] libmachine: (ha-273199-m04) Calling .GetSSHUsername
	I1028 12:06:25.036306  102645 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199-m04/id_rsa Username:docker}
	I1028 12:06:25.113554  102645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:06:25.166520  102645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:06:25.218465  102645 main.go:141] libmachine: Stopping "ha-273199-m04"...
	I1028 12:06:25.218503  102645 main.go:141] libmachine: (ha-273199-m04) Calling .GetState
	I1028 12:06:25.220005  102645 main.go:141] libmachine: (ha-273199-m04) Calling .Stop
	I1028 12:06:25.223538  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 0/120
	I1028 12:06:26.225088  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 1/120
	I1028 12:06:27.226942  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 2/120
	I1028 12:06:28.228129  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 3/120
	I1028 12:06:29.229414  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 4/120
	I1028 12:06:30.231296  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 5/120
	I1028 12:06:31.232581  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 6/120
	I1028 12:06:32.233737  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 7/120
	I1028 12:06:33.235001  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 8/120
	I1028 12:06:34.236286  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 9/120
	I1028 12:06:35.238291  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 10/120
	I1028 12:06:36.239436  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 11/120
	I1028 12:06:37.240655  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 12/120
	I1028 12:06:38.242111  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 13/120
	I1028 12:06:39.243527  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 14/120
	I1028 12:06:40.245317  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 15/120
	I1028 12:06:41.246507  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 16/120
	I1028 12:06:42.247821  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 17/120
	I1028 12:06:43.249022  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 18/120
	I1028 12:06:44.250537  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 19/120
	I1028 12:06:45.252742  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 20/120
	I1028 12:06:46.254197  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 21/120
	I1028 12:06:47.255467  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 22/120
	I1028 12:06:48.256795  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 23/120
	I1028 12:06:49.258319  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 24/120
	I1028 12:06:50.260403  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 25/120
	I1028 12:06:51.261844  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 26/120
	I1028 12:06:52.263072  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 27/120
	I1028 12:06:53.264311  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 28/120
	I1028 12:06:54.266019  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 29/120
	I1028 12:06:55.267510  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 30/120
	I1028 12:06:56.268969  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 31/120
	I1028 12:06:57.270225  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 32/120
	I1028 12:06:58.271379  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 33/120
	I1028 12:06:59.272643  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 34/120
	I1028 12:07:00.274795  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 35/120
	I1028 12:07:01.276113  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 36/120
	I1028 12:07:02.277275  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 37/120
	I1028 12:07:03.278647  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 38/120
	I1028 12:07:04.280249  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 39/120
	I1028 12:07:05.282320  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 40/120
	I1028 12:07:06.283724  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 41/120
	I1028 12:07:07.285074  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 42/120
	I1028 12:07:08.286604  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 43/120
	I1028 12:07:09.288881  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 44/120
	I1028 12:07:10.291097  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 45/120
	I1028 12:07:11.292437  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 46/120
	I1028 12:07:12.293786  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 47/120
	I1028 12:07:13.295094  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 48/120
	I1028 12:07:14.296374  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 49/120
	I1028 12:07:15.298399  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 50/120
	I1028 12:07:16.299774  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 51/120
	I1028 12:07:17.301083  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 52/120
	I1028 12:07:18.302649  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 53/120
	I1028 12:07:19.304654  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 54/120
	I1028 12:07:20.306732  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 55/120
	I1028 12:07:21.308216  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 56/120
	I1028 12:07:22.309976  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 57/120
	I1028 12:07:23.311537  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 58/120
	I1028 12:07:24.313887  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 59/120
	I1028 12:07:25.315903  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 60/120
	I1028 12:07:26.318059  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 61/120
	I1028 12:07:27.319624  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 62/120
	I1028 12:07:28.321027  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 63/120
	I1028 12:07:29.322508  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 64/120
	I1028 12:07:30.324310  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 65/120
	I1028 12:07:31.325930  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 66/120
	I1028 12:07:32.327172  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 67/120
	I1028 12:07:33.328710  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 68/120
	I1028 12:07:34.330006  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 69/120
	I1028 12:07:35.332233  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 70/120
	I1028 12:07:36.334063  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 71/120
	I1028 12:07:37.335323  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 72/120
	I1028 12:07:38.336615  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 73/120
	I1028 12:07:39.338150  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 74/120
	I1028 12:07:40.340005  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 75/120
	I1028 12:07:41.341459  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 76/120
	I1028 12:07:42.342561  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 77/120
	I1028 12:07:43.343969  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 78/120
	I1028 12:07:44.346037  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 79/120
	I1028 12:07:45.348100  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 80/120
	I1028 12:07:46.349917  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 81/120
	I1028 12:07:47.351356  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 82/120
	I1028 12:07:48.352587  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 83/120
	I1028 12:07:49.354467  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 84/120
	I1028 12:07:50.356296  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 85/120
	I1028 12:07:51.357607  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 86/120
	I1028 12:07:52.358874  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 87/120
	I1028 12:07:53.360148  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 88/120
	I1028 12:07:54.361802  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 89/120
	I1028 12:07:55.363815  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 90/120
	I1028 12:07:56.366284  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 91/120
	I1028 12:07:57.367894  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 92/120
	I1028 12:07:58.370056  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 93/120
	I1028 12:07:59.372022  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 94/120
	I1028 12:08:00.373915  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 95/120
	I1028 12:08:01.375438  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 96/120
	I1028 12:08:02.377022  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 97/120
	I1028 12:08:03.378701  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 98/120
	I1028 12:08:04.380100  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 99/120
	I1028 12:08:05.381976  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 100/120
	I1028 12:08:06.383453  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 101/120
	I1028 12:08:07.384811  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 102/120
	I1028 12:08:08.386081  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 103/120
	I1028 12:08:09.387606  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 104/120
	I1028 12:08:10.389680  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 105/120
	I1028 12:08:11.390941  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 106/120
	I1028 12:08:12.392271  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 107/120
	I1028 12:08:13.394071  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 108/120
	I1028 12:08:14.395458  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 109/120
	I1028 12:08:15.397424  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 110/120
	I1028 12:08:16.398836  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 111/120
	I1028 12:08:17.400373  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 112/120
	I1028 12:08:18.401610  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 113/120
	I1028 12:08:19.402885  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 114/120
	I1028 12:08:20.404807  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 115/120
	I1028 12:08:21.406849  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 116/120
	I1028 12:08:22.408154  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 117/120
	I1028 12:08:23.409383  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 118/120
	I1028 12:08:24.410874  102645 main.go:141] libmachine: (ha-273199-m04) Waiting for machine to stop 119/120
	I1028 12:08:25.412338  102645 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 12:08:25.412458  102645 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 12:08:25.414310  102645 out.go:201] 
	W1028 12:08:25.415658  102645 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 12:08:25.415677  102645 out.go:270] * 
	* 
	W1028 12:08:25.419023  102645 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:08:25.420171  102645 out.go:201] 

                                                
                                                
** /stderr **
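The stderr trace above captures minikube's stop sequence for ha-273199-m04: it backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup with rsync, asks the kvm2 driver to stop the VM, then polls the machine state roughly once per second for up to 120 attempts before giving up with GUEST_STOP_TIMEOUT (exit status 82). Below is a rough Go sketch of that poll-until-stopped pattern only; the driver interface and fake VM are stand-ins so the example runs on its own, and none of the names are minikube's real ones.

package main

import (
	"fmt"
	"time"
)

// vmDriver stands in for minikube's libmachine driver plumbing.
type vmDriver interface {
	Stop() error            // request shutdown (asynchronous)
	State() (string, error) // e.g. "Running" or "Stopped"
}

// stopWithTimeout mirrors the pattern in the trace: request a stop, poll the
// state once per second for up to `attempts` tries, and return an error if
// the VM is still running afterwards.
func stopWithTimeout(d vmDriver, attempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		st, err := d.State()
		if err != nil {
			return err
		}
		if st != "Running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf(`unable to stop vm, current state "Running"`)
}

// fakeVM reports "Stopped" after a fixed number of polls; it exists only to
// make this sketch runnable without a real hypervisor.
type fakeVM struct{ polls int }

func (f *fakeVM) Stop() error { return nil }
func (f *fakeVM) State() (string, error) {
	f.polls--
	if f.polls <= 0 {
		return "Stopped", nil
	}
	return "Running", nil
}

func main() {
	if err := stopWithTimeout(&fakeVM{polls: 3}, 120); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}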
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-273199 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr: (19.05886073s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-273199 -n ha-273199
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 logs -n 25: (1.816020618s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m04 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp testdata/cp-test.txt                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199:/home/docker/cp-test_ha-273199-m04_ha-273199.txt                       |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199 sudo cat                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199.txt                                 |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m02:/home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m02 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m03:/home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n                                                                 | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | ha-273199-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-273199 ssh -n ha-273199-m03 sudo cat                                          | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|         | /home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-273199 node stop m02 -v=7                                                     | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-273199 node start m02 -v=7                                                    | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 11:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-273199 -v=7                                                           | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-273199 -v=7                                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-273199 --wait=true -v=7                                                    | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC | 28 Oct 24 12:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-273199                                                                | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:06 UTC |                     |
	| node    | ha-273199 node delete m03 -v=7                                                   | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:06 UTC | 28 Oct 24 12:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-273199 stop -v=7                                                              | ha-273199 | jenkins | v1.34.0 | 28 Oct 24 12:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:02:03
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:02:03.903611  100870 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:02:03.903785  100870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:03.903796  100870 out.go:358] Setting ErrFile to fd 2...
	I1028 12:02:03.903802  100870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:03.903996  100870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:02:03.904558  100870 out.go:352] Setting JSON to false
	I1028 12:02:03.905456  100870 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6274,"bootTime":1730110650,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:02:03.905566  100870 start.go:139] virtualization: kvm guest
	I1028 12:02:03.908809  100870 out.go:177] * [ha-273199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:02:03.910015  100870 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:02:03.910035  100870 notify.go:220] Checking for updates...
	I1028 12:02:03.912526  100870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:02:03.913660  100870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:02:03.914810  100870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:02:03.915925  100870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:02:03.916953  100870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:02:03.918563  100870 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:03.918686  100870 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:02:03.919137  100870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:03.919196  100870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:03.935019  100870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1028 12:02:03.935414  100870 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:03.936095  100870 main.go:141] libmachine: Using API Version  1
	I1028 12:02:03.936114  100870 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:03.936641  100870 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:03.936847  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:02:03.970864  100870 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:02:03.971854  100870 start.go:297] selected driver: kvm2
	I1028 12:02:03.971870  100870 start.go:901] validating driver "kvm2" against &{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:03.972006  100870 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:02:03.972302  100870 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:03.972369  100870 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:02:03.987583  100870 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:02:03.988327  100870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:02:03.988360  100870 cni.go:84] Creating CNI manager for ""
	I1028 12:02:03.988418  100870 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 12:02:03.988475  100870 start.go:340] cluster config:
	{Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:03.988607  100870 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:03.991064  100870 out.go:177] * Starting "ha-273199" primary control-plane node in "ha-273199" cluster
	I1028 12:02:03.992158  100870 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:02:03.992190  100870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:02:03.992197  100870 cache.go:56] Caching tarball of preloaded images
	I1028 12:02:03.992270  100870 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:02:03.992284  100870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:02:03.992410  100870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/config.json ...
	I1028 12:02:03.992612  100870 start.go:360] acquireMachinesLock for ha-273199: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:02:03.992670  100870 start.go:364] duration metric: took 37.357µs to acquireMachinesLock for "ha-273199"
	I1028 12:02:03.992691  100870 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:02:03.992701  100870 fix.go:54] fixHost starting: 
	I1028 12:02:03.993020  100870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:03.993075  100870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:04.008047  100870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I1028 12:02:04.008464  100870 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:04.008967  100870 main.go:141] libmachine: Using API Version  1
	I1028 12:02:04.008990  100870 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:04.009320  100870 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:04.009477  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:02:04.009618  100870 main.go:141] libmachine: (ha-273199) Calling .GetState
	I1028 12:02:04.010988  100870 fix.go:112] recreateIfNeeded on ha-273199: state=Running err=<nil>
	W1028 12:02:04.011005  100870 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:02:04.012782  100870 out.go:177] * Updating the running kvm2 "ha-273199" VM ...
	I1028 12:02:04.014069  100870 machine.go:93] provisionDockerMachine start ...
	I1028 12:02:04.014088  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:02:04.014275  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.016594  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.017165  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.017185  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.017352  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.017505  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.017625  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.017783  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.017917  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.018095  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.018104  100870 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:02:04.136386  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 12:02:04.136414  100870 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 12:02:04.136651  100870 buildroot.go:166] provisioning hostname "ha-273199"
	I1028 12:02:04.136675  100870 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 12:02:04.136838  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.139912  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.140326  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.140344  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.140515  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.140685  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.140835  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.140984  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.141140  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.141350  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.141363  100870 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-273199 && echo "ha-273199" | sudo tee /etc/hostname
	I1028 12:02:04.266481  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-273199
	
	I1028 12:02:04.266533  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.269395  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.269827  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.269857  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.269989  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.270232  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.270391  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.270485  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.270679  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.270856  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.270870  100870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-273199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-273199/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-273199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:02:04.384725  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:02:04.384761  100870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:02:04.384787  100870 buildroot.go:174] setting up certificates
	I1028 12:02:04.384804  100870 provision.go:84] configureAuth start
	I1028 12:02:04.384821  100870 main.go:141] libmachine: (ha-273199) Calling .GetMachineName
	I1028 12:02:04.385115  100870 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 12:02:04.387947  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.388377  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.388399  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.388568  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.390735  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.391127  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.391153  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.391272  100870 provision.go:143] copyHostCerts
	I1028 12:02:04.391308  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:02:04.391379  100870 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:02:04.391398  100870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:02:04.391491  100870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:02:04.391662  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:02:04.391708  100870 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:02:04.391720  100870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:02:04.391772  100870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:02:04.391867  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:02:04.391895  100870 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:02:04.391904  100870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:02:04.391944  100870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:02:04.392036  100870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.ha-273199 san=[127.0.0.1 192.168.39.208 ha-273199 localhost minikube]
	I1028 12:02:04.529305  100870 provision.go:177] copyRemoteCerts
	I1028 12:02:04.529387  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:02:04.529422  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.532448  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.532918  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.532950  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.533121  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.533304  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.533465  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.533578  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:02:04.618283  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 12:02:04.618357  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:02:04.641492  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 12:02:04.641570  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 12:02:04.663242  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 12:02:04.663298  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:02:04.684746  100870 provision.go:87] duration metric: took 299.924974ms to configureAuth
	I1028 12:02:04.684773  100870 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:02:04.685029  100870 config.go:182] Loaded profile config "ha-273199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:04.685101  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:02:04.688209  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.688605  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:02:04.688626  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:02:04.688845  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:02:04.689025  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.689183  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:02:04.689309  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:02:04.689441  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:04.689623  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:02:04.689638  100870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:03:35.537639  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:03:35.537672  100870 machine.go:96] duration metric: took 1m31.52358774s to provisionDockerMachine
	I1028 12:03:35.537686  100870 start.go:293] postStartSetup for "ha-273199" (driver="kvm2")
	I1028 12:03:35.537697  100870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:03:35.537715  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.538113  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:03:35.538149  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.541355  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.541809  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.541836  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.541968  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.542173  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.542327  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.542439  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:03:35.631492  100870 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:03:35.635456  100870 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:03:35.635475  100870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:03:35.635535  100870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:03:35.635616  100870 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:03:35.635654  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 12:03:35.635754  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:03:35.645603  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:03:35.668954  100870 start.go:296] duration metric: took 131.25574ms for postStartSetup
	I1028 12:03:35.668999  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.669284  100870 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1028 12:03:35.669316  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.671833  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.672295  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.672314  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.672477  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.672652  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.672793  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.672921  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	W1028 12:03:35.758129  100870 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1028 12:03:35.758151  100870 fix.go:56] duration metric: took 1m31.765453008s for fixHost
	I1028 12:03:35.758176  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.760659  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.760992  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.761017  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.761154  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.761355  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.761537  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.761681  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.761829  100870 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:35.762007  100870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:03:35.762017  100870 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:03:35.875795  100870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117015.848180491
	
	I1028 12:03:35.875816  100870 fix.go:216] guest clock: 1730117015.848180491
	I1028 12:03:35.875822  100870 fix.go:229] Guest: 2024-10-28 12:03:35.848180491 +0000 UTC Remote: 2024-10-28 12:03:35.758160266 +0000 UTC m=+91.893889761 (delta=90.020225ms)
	I1028 12:03:35.875842  100870 fix.go:200] guest clock delta is within tolerance: 90.020225ms
	I1028 12:03:35.875848  100870 start.go:83] releasing machines lock for "ha-273199", held for 1m31.883167163s
	I1028 12:03:35.875878  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.876092  100870 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 12:03:35.878554  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.878906  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.878936  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.879054  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.879516  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.879708  100870 main.go:141] libmachine: (ha-273199) Calling .DriverName
	I1028 12:03:35.879808  100870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:03:35.879847  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.879914  100870 ssh_runner.go:195] Run: cat /version.json
	I1028 12:03:35.879938  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHHostname
	I1028 12:03:35.882467  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.882609  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.882837  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.882867  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.882987  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.883130  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.883149  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:35.883180  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:35.883308  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHPort
	I1028 12:03:35.883317  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.883480  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHKeyPath
	I1028 12:03:35.883496  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:03:35.883609  100870 main.go:141] libmachine: (ha-273199) Calling .GetSSHUsername
	I1028 12:03:35.883757  100870 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/ha-273199/id_rsa Username:docker}
	I1028 12:03:35.964016  100870 ssh_runner.go:195] Run: systemctl --version
	I1028 12:03:35.983445  100870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:03:36.146255  100870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:03:36.151549  100870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:03:36.151612  100870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:03:36.160010  100870 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 12:03:36.160033  100870 start.go:495] detecting cgroup driver to use...
	I1028 12:03:36.160120  100870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:03:36.175719  100870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:03:36.189003  100870 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:03:36.189044  100870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:03:36.201055  100870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:03:36.212922  100870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:03:36.363853  100870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:03:36.497270  100870 docker.go:233] disabling docker service ...
	I1028 12:03:36.497342  100870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:03:36.512883  100870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:03:36.525570  100870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:03:36.665771  100870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:03:36.807915  100870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:03:36.821298  100870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:03:36.838231  100870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:03:36.838314  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.847790  100870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:03:36.847878  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.857561  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.867064  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.876683  100870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:03:36.887009  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.896857  100870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.907254  100870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:36.916861  100870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:03:36.925444  100870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:03:36.933997  100870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:03:37.066208  100870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:03:38.470285  100870 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.404034896s)
	I1028 12:03:38.470321  100870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:03:38.470380  100870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:03:38.475330  100870 start.go:563] Will wait 60s for crictl version
	I1028 12:03:38.475395  100870 ssh_runner.go:195] Run: which crictl
	I1028 12:03:38.478798  100870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:03:38.519102  100870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:03:38.519200  100870 ssh_runner.go:195] Run: crio --version
	I1028 12:03:38.546233  100870 ssh_runner.go:195] Run: crio --version
	I1028 12:03:38.575873  100870 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:03:38.577051  100870 main.go:141] libmachine: (ha-273199) Calling .GetIP
	I1028 12:03:38.579762  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:38.580128  100870 main.go:141] libmachine: (ha-273199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:d4:52", ip: ""} in network mk-ha-273199: {Iface:virbr1 ExpiryTime:2024-10-28 12:53:12 +0000 UTC Type:0 Mac:52:54:00:22:d4:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-273199 Clientid:01:52:54:00:22:d4:52}
	I1028 12:03:38.580156  100870 main.go:141] libmachine: (ha-273199) DBG | domain ha-273199 has defined IP address 192.168.39.208 and MAC address 52:54:00:22:d4:52 in network mk-ha-273199
	I1028 12:03:38.580449  100870 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:03:38.584643  100870 kubeadm.go:883] updating cluster {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:03:38.584776  100870 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:03:38.584817  100870 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:03:38.627991  100870 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:03:38.628025  100870 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:03:38.628083  100870 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:03:38.661185  100870 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:03:38.661210  100870 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:03:38.661222  100870 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1028 12:03:38.661326  100870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-273199 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:03:38.661393  100870 ssh_runner.go:195] Run: crio config
	I1028 12:03:38.717907  100870 cni.go:84] Creating CNI manager for ""
	I1028 12:03:38.717932  100870 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 12:03:38.717945  100870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:03:38.717974  100870 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-273199 NodeName:ha-273199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:03:38.718122  100870 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-273199"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:03:38.718147  100870 kube-vip.go:115] generating kube-vip config ...
	I1028 12:03:38.718200  100870 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 12:03:38.729484  100870 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 12:03:38.729594  100870 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 12:03:38.729653  100870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:03:38.738275  100870 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:03:38.738330  100870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 12:03:38.746621  100870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 12:03:38.761336  100870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:03:38.775885  100870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 12:03:38.791466  100870 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 12:03:38.806550  100870 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 12:03:38.810171  100870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:03:38.942901  100870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:03:38.956371  100870 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199 for IP: 192.168.39.208
	I1028 12:03:38.956397  100870 certs.go:194] generating shared ca certs ...
	I1028 12:03:38.956413  100870 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:38.956556  100870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:03:38.956599  100870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:03:38.956609  100870 certs.go:256] generating profile certs ...
	I1028 12:03:38.956691  100870 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/client.key
	I1028 12:03:38.956717  100870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e
	I1028 12:03:38.956735  100870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.225 192.168.39.14 192.168.39.254]
	I1028 12:03:39.052446  100870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e ...
	I1028 12:03:39.052475  100870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e: {Name:mkf7d063e306797dc9e6e5ad6dc9bcb3b72bf806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:39.052639  100870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e ...
	I1028 12:03:39.052651  100870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e: {Name:mk4019b5234c52eba7352f0210ac3f3e5c064235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:39.052717  100870 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt.563e2e4e -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt
	I1028 12:03:39.052856  100870 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key.563e2e4e -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key
	I1028 12:03:39.052983  100870 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key
	I1028 12:03:39.053000  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 12:03:39.053014  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 12:03:39.053027  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 12:03:39.053039  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 12:03:39.053050  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 12:03:39.053062  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 12:03:39.053074  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 12:03:39.053086  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 12:03:39.053138  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:03:39.053175  100870 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:03:39.053184  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:03:39.053205  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:03:39.053226  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:03:39.053249  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:03:39.053288  100870 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:03:39.053313  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.053342  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.053368  100870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.053949  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:03:39.076687  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:03:39.097250  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:03:39.118090  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:03:39.139412  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:03:39.160112  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:03:39.181032  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:03:39.201432  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/ha-273199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:03:39.222134  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:03:39.243775  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:03:39.264965  100870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:03:39.285580  100870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:03:39.300958  100870 ssh_runner.go:195] Run: openssl version
	I1028 12:03:39.306254  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:03:39.315512  100870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.319526  100870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.319571  100870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:03:39.324647  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:03:39.332921  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:03:39.342691  100870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.346507  100870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.346543  100870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:39.351391  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:03:39.359451  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:03:39.368795  100870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.372599  100870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.372640  100870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:03:39.377688  100870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:03:39.385863  100870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:03:39.389982  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:03:39.395020  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:03:39.399968  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:03:39.404924  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:03:39.410342  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:03:39.415548  100870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:03:39.420736  100870 kubeadm.go:392] StartCluster: {Name:ha-273199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-273199 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.29 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:03:39.420842  100870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:03:39.420914  100870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:03:39.456803  100870 cri.go:89] found id: "0616197f0731a6c7aecad2af81db7abe3663092ac2cc43248fae06a2b1dbc5bd"
	I1028 12:03:39.456823  100870 cri.go:89] found id: "c7b36f903237978f5a1eda292661332688e1877341a87673b1ec023014dc4c7f"
	I1028 12:03:39.456827  100870 cri.go:89] found id: "974fb0419fbfb90af409e1c6b810ef3505544800969adc0bf73405ee06b57c1c"
	I1028 12:03:39.456830  100870 cri.go:89] found id: "fe58f2eaad87a3e4ca463339b6bdc855488a4708b9af049606ed9263e8e3c3ce"
	I1028 12:03:39.456833  100870 cri.go:89] found id: "74749e36327760b660e76850cdd18acee66a25699a1565fd0a1c62b07c64da6d"
	I1028 12:03:39.456836  100870 cri.go:89] found id: "72c80fedf66439aca8395ae10d9a2b669d916d6f0bd9ae621899fa27cdaf02c3"
	I1028 12:03:39.456839  100870 cri.go:89] found id: "e082051f544c2f58c0836531b76446e3898029ae17fefdbfa255bbbd926449a9"
	I1028 12:03:39.456841  100870 cri.go:89] found id: "82471ae5ddf92bc9f27e4f2d1643b24168e0f4093b368ee655ccfac791641e5a"
	I1028 12:03:39.456844  100870 cri.go:89] found id: "39409b2e850129d141d50d88302c758bd2957fb9d17df093040fc143d949c447"
	I1028 12:03:39.456848  100870 cri.go:89] found id: "8b350f0da3b167090f4962b426f2cdb72c5e67efd4c086bc5d6cc4eab1377a56"
	I1028 12:03:39.456851  100870 cri.go:89] found id: "07773cb979d8fe67dd6a5c0d085494b2c16f8ba096d905939c5e2f8c61fee5df"
	I1028 12:03:39.456854  100870 cri.go:89] found id: "6fb4822a5b7911d46388aa08e81918660264ddda93001795899d5f8524a5ec4c"
	I1028 12:03:39.456869  100870 cri.go:89] found id: "ec2df51593c58ac828d4854955c86294500f87c498e29577626b166b1b3f72f3"
	I1028 12:03:39.456875  100870 cri.go:89] found id: ""
	I1028 12:03:39.456910  100870 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-273199 -n ha-273199
helpers_test.go:261: (dbg) Run:  kubectl --context ha-273199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.89s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-363277
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-363277
E1028 12:24:20.377395   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-363277: exit status 82 (2m1.757548427s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-363277-m03"  ...
	* Stopping node "multinode-363277-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-363277" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-363277 --wait=true -v=8 --alsologtostderr
E1028 12:27:13.449104   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-363277 --wait=true -v=8 --alsologtostderr: (3m23.086283783s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-363277
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-363277 -n multinode-363277
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-363277 logs -n 25: (1.901752538s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4154964164/001/cp-test_multinode-363277-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277:/home/docker/cp-test_multinode-363277-m02_multinode-363277.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277 sudo cat                                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m02_multinode-363277.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03:/home/docker/cp-test_multinode-363277-m02_multinode-363277-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277-m03 sudo cat                                   | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m02_multinode-363277-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp testdata/cp-test.txt                                                | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4154964164/001/cp-test_multinode-363277-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277:/home/docker/cp-test_multinode-363277-m03_multinode-363277.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277 sudo cat                                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m03_multinode-363277.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02:/home/docker/cp-test_multinode-363277-m03_multinode-363277-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277-m02 sudo cat                                   | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m03_multinode-363277-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-363277 node stop m03                                                          | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	| node    | multinode-363277 node start                                                             | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-363277                                                                | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:23 UTC |                     |
	| stop    | -p multinode-363277                                                                     | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:23 UTC |                     |
	| start   | -p multinode-363277                                                                     | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:25 UTC | 28 Oct 24 12:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-363277                                                                | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:25:15
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:25:15.370368  113146 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:25:15.370607  113146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:25:15.370615  113146 out.go:358] Setting ErrFile to fd 2...
	I1028 12:25:15.370619  113146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:25:15.370769  113146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:25:15.371278  113146 out.go:352] Setting JSON to false
	I1028 12:25:15.372181  113146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7665,"bootTime":1730110650,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:25:15.372297  113146 start.go:139] virtualization: kvm guest
	I1028 12:25:15.374466  113146 out.go:177] * [multinode-363277] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:25:15.375848  113146 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:25:15.375922  113146 notify.go:220] Checking for updates...
	I1028 12:25:15.378475  113146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:25:15.379805  113146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:25:15.381150  113146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:25:15.382257  113146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:25:15.383375  113146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:25:15.385045  113146 config.go:182] Loaded profile config "multinode-363277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:25:15.385136  113146 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:25:15.385572  113146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:25:15.385639  113146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:25:15.400577  113146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38391
	I1028 12:25:15.401049  113146 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:25:15.401685  113146 main.go:141] libmachine: Using API Version  1
	I1028 12:25:15.401718  113146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:25:15.402055  113146 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:25:15.402243  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:25:15.436369  113146 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:25:15.437480  113146 start.go:297] selected driver: kvm2
	I1028 12:25:15.437492  113146 start.go:901] validating driver "kvm2" against &{Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:25:15.437634  113146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:25:15.437966  113146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:25:15.438044  113146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:25:15.452188  113146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:25:15.452821  113146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:25:15.452850  113146 cni.go:84] Creating CNI manager for ""
	I1028 12:25:15.452905  113146 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 12:25:15.452954  113146 start.go:340] cluster config:
	{Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:25:15.453085  113146 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:25:15.455367  113146 out.go:177] * Starting "multinode-363277" primary control-plane node in "multinode-363277" cluster
	I1028 12:25:15.456479  113146 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:25:15.456524  113146 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:25:15.456535  113146 cache.go:56] Caching tarball of preloaded images
	I1028 12:25:15.456613  113146 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:25:15.456624  113146 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:25:15.456731  113146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/config.json ...
	I1028 12:25:15.456913  113146 start.go:360] acquireMachinesLock for multinode-363277: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:25:15.456953  113146 start.go:364] duration metric: took 23.29µs to acquireMachinesLock for "multinode-363277"
	I1028 12:25:15.456967  113146 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:25:15.456975  113146 fix.go:54] fixHost starting: 
	I1028 12:25:15.457219  113146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:25:15.457249  113146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:25:15.470744  113146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1028 12:25:15.471181  113146 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:25:15.471675  113146 main.go:141] libmachine: Using API Version  1
	I1028 12:25:15.471693  113146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:25:15.472055  113146 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:25:15.472251  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:25:15.472376  113146 main.go:141] libmachine: (multinode-363277) Calling .GetState
	I1028 12:25:15.473815  113146 fix.go:112] recreateIfNeeded on multinode-363277: state=Running err=<nil>
	W1028 12:25:15.473836  113146 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:25:15.476507  113146 out.go:177] * Updating the running kvm2 "multinode-363277" VM ...
	I1028 12:25:15.477923  113146 machine.go:93] provisionDockerMachine start ...
	I1028 12:25:15.477941  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:25:15.478122  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.480515  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.480951  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.480987  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.481132  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.481290  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.481429  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.481556  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.481678  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:15.481932  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:15.481947  113146 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:25:15.584338  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-363277
	
	I1028 12:25:15.584372  113146 main.go:141] libmachine: (multinode-363277) Calling .GetMachineName
	I1028 12:25:15.584680  113146 buildroot.go:166] provisioning hostname "multinode-363277"
	I1028 12:25:15.584711  113146 main.go:141] libmachine: (multinode-363277) Calling .GetMachineName
	I1028 12:25:15.584898  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.587623  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.588014  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.588042  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.588158  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.588322  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.588484  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.588631  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.588777  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:15.588941  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:15.588953  113146 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-363277 && echo "multinode-363277" | sudo tee /etc/hostname
	I1028 12:25:15.701746  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-363277
	
	I1028 12:25:15.701770  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.704417  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.704828  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.704860  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.705017  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.705198  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.705370  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.705502  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.705630  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:15.705799  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:15.705821  113146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-363277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-363277/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-363277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:25:15.803978  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:25:15.804006  113146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:25:15.804043  113146 buildroot.go:174] setting up certificates
	I1028 12:25:15.804058  113146 provision.go:84] configureAuth start
	I1028 12:25:15.804073  113146 main.go:141] libmachine: (multinode-363277) Calling .GetMachineName
	I1028 12:25:15.804365  113146 main.go:141] libmachine: (multinode-363277) Calling .GetIP
	I1028 12:25:15.807139  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.807507  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.807551  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.807670  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.809821  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.810140  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.810163  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.810299  113146 provision.go:143] copyHostCerts
	I1028 12:25:15.810331  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:25:15.810382  113146 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:25:15.810397  113146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:25:15.810463  113146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:25:15.810560  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:25:15.810582  113146 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:25:15.810587  113146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:25:15.810613  113146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:25:15.810671  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:25:15.810686  113146 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:25:15.810692  113146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:25:15.810713  113146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:25:15.810776  113146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.multinode-363277 san=[127.0.0.1 192.168.39.174 localhost minikube multinode-363277]
	I1028 12:25:15.883401  113146 provision.go:177] copyRemoteCerts
	I1028 12:25:15.883464  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:25:15.883490  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.886015  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.886337  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.886364  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.886495  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.886692  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.886854  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.886987  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:25:15.965142  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 12:25:15.965210  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:25:15.988401  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 12:25:15.988464  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 12:25:16.009689  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 12:25:16.009749  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:25:16.030864  113146 provision.go:87] duration metric: took 226.793505ms to configureAuth
	I1028 12:25:16.030892  113146 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:25:16.031098  113146 config.go:182] Loaded profile config "multinode-363277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:25:16.031172  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:16.033704  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:16.034077  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:16.034111  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:16.034276  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:16.034430  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:16.034601  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:16.034771  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:16.034927  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:16.035150  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:16.035168  113146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:26:46.629890  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:26:46.629940  113146 machine.go:96] duration metric: took 1m31.152002375s to provisionDockerMachine
	I1028 12:26:46.629964  113146 start.go:293] postStartSetup for "multinode-363277" (driver="kvm2")
	I1028 12:26:46.629980  113146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:26:46.630008  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.630331  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:26:46.630383  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.633471  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.633910  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.633939  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.634205  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.634410  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.634635  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.634786  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:26:46.714803  113146 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:26:46.718535  113146 command_runner.go:130] > NAME=Buildroot
	I1028 12:26:46.718558  113146 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 12:26:46.718564  113146 command_runner.go:130] > ID=buildroot
	I1028 12:26:46.718569  113146 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 12:26:46.718576  113146 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 12:26:46.718637  113146 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:26:46.718650  113146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:26:46.718703  113146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:26:46.718784  113146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:26:46.718798  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 12:26:46.718895  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:26:46.728124  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:26:46.750168  113146 start.go:296] duration metric: took 120.189513ms for postStartSetup
	I1028 12:26:46.750210  113146 fix.go:56] duration metric: took 1m31.29323411s for fixHost
	I1028 12:26:46.750241  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.753198  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.753692  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.753721  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.753866  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.754062  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.754239  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.754392  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.754580  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:26:46.754786  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:26:46.754800  113146 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:26:46.852227  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730118406.822489579
	
	I1028 12:26:46.852250  113146 fix.go:216] guest clock: 1730118406.822489579
	I1028 12:26:46.852259  113146 fix.go:229] Guest: 2024-10-28 12:26:46.822489579 +0000 UTC Remote: 2024-10-28 12:26:46.750215468 +0000 UTC m=+91.418749930 (delta=72.274111ms)
	I1028 12:26:46.852286  113146 fix.go:200] guest clock delta is within tolerance: 72.274111ms
	I1028 12:26:46.852293  113146 start.go:83] releasing machines lock for "multinode-363277", held for 1m31.395330787s
	I1028 12:26:46.852333  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.852620  113146 main.go:141] libmachine: (multinode-363277) Calling .GetIP
	I1028 12:26:46.855384  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.855815  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.855848  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.855972  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.856438  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.856623  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.856740  113146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:26:46.856801  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.856820  113146 ssh_runner.go:195] Run: cat /version.json
	I1028 12:26:46.856860  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.859523  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.859594  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.859916  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.859946  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.859973  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.860038  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.860054  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.860217  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.860231  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.860380  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.860454  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.860507  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.860571  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:26:46.860616  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:26:46.931508  113146 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 12:26:46.931716  113146 ssh_runner.go:195] Run: systemctl --version
	I1028 12:26:46.960787  113146 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1028 12:26:46.961386  113146 command_runner.go:130] > systemd 252 (252)
	I1028 12:26:46.961428  113146 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 12:26:46.961479  113146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:26:47.116759  113146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:26:47.122101  113146 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 12:26:47.122171  113146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:26:47.122254  113146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:26:47.130807  113146 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 12:26:47.130830  113146 start.go:495] detecting cgroup driver to use...
	I1028 12:26:47.130892  113146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:26:47.146990  113146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:26:47.160122  113146 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:26:47.160181  113146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:26:47.172032  113146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:26:47.183741  113146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:26:47.320690  113146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:26:47.464238  113146 docker.go:233] disabling docker service ...
	I1028 12:26:47.464310  113146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:26:47.479746  113146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:26:47.492971  113146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:26:47.630739  113146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:26:47.768326  113146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:26:47.781151  113146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:26:47.798101  113146 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1028 12:26:47.798141  113146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:26:47.798184  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.808732  113146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:26:47.808791  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.818387  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.827879  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.837125  113146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:26:47.846684  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.856098  113146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.865551  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.874789  113146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:26:47.883132  113146 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 12:26:47.883201  113146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:26:47.891960  113146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:26:48.025332  113146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:26:54.029894  113146 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.004523549s)
	I1028 12:26:54.029923  113146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:26:54.029968  113146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:26:54.036371  113146 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1028 12:26:54.036394  113146 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 12:26:54.036401  113146 command_runner.go:130] > Device: 0,22	Inode: 1270        Links: 1
	I1028 12:26:54.036408  113146 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 12:26:54.036413  113146 command_runner.go:130] > Access: 2024-10-28 12:26:53.928012448 +0000
	I1028 12:26:54.036418  113146 command_runner.go:130] > Modify: 2024-10-28 12:26:53.895011771 +0000
	I1028 12:26:54.036423  113146 command_runner.go:130] > Change: 2024-10-28 12:26:53.895011771 +0000
	I1028 12:26:54.036430  113146 command_runner.go:130] >  Birth: -
	I1028 12:26:54.036592  113146 start.go:563] Will wait 60s for crictl version
	I1028 12:26:54.036651  113146 ssh_runner.go:195] Run: which crictl
	I1028 12:26:54.040127  113146 command_runner.go:130] > /usr/bin/crictl
	I1028 12:26:54.040260  113146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:26:54.074039  113146 command_runner.go:130] > Version:  0.1.0
	I1028 12:26:54.074061  113146 command_runner.go:130] > RuntimeName:  cri-o
	I1028 12:26:54.074067  113146 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1028 12:26:54.074072  113146 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 12:26:54.075283  113146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:26:54.075359  113146 ssh_runner.go:195] Run: crio --version
	I1028 12:26:54.103472  113146 command_runner.go:130] > crio version 1.29.1
	I1028 12:26:54.103493  113146 command_runner.go:130] > Version:        1.29.1
	I1028 12:26:54.103499  113146 command_runner.go:130] > GitCommit:      unknown
	I1028 12:26:54.103503  113146 command_runner.go:130] > GitCommitDate:  unknown
	I1028 12:26:54.103523  113146 command_runner.go:130] > GitTreeState:   clean
	I1028 12:26:54.103530  113146 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 12:26:54.103535  113146 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 12:26:54.103539  113146 command_runner.go:130] > Compiler:       gc
	I1028 12:26:54.103545  113146 command_runner.go:130] > Platform:       linux/amd64
	I1028 12:26:54.103553  113146 command_runner.go:130] > Linkmode:       dynamic
	I1028 12:26:54.103562  113146 command_runner.go:130] > BuildTags:      
	I1028 12:26:54.103569  113146 command_runner.go:130] >   containers_image_ostree_stub
	I1028 12:26:54.103573  113146 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 12:26:54.103577  113146 command_runner.go:130] >   btrfs_noversion
	I1028 12:26:54.103581  113146 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 12:26:54.103586  113146 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 12:26:54.103590  113146 command_runner.go:130] >   seccomp
	I1028 12:26:54.103597  113146 command_runner.go:130] > LDFlags:          unknown
	I1028 12:26:54.103601  113146 command_runner.go:130] > SeccompEnabled:   true
	I1028 12:26:54.103605  113146 command_runner.go:130] > AppArmorEnabled:  false
	I1028 12:26:54.104655  113146 ssh_runner.go:195] Run: crio --version
	I1028 12:26:54.130593  113146 command_runner.go:130] > crio version 1.29.1
	I1028 12:26:54.130617  113146 command_runner.go:130] > Version:        1.29.1
	I1028 12:26:54.130625  113146 command_runner.go:130] > GitCommit:      unknown
	I1028 12:26:54.130632  113146 command_runner.go:130] > GitCommitDate:  unknown
	I1028 12:26:54.130638  113146 command_runner.go:130] > GitTreeState:   clean
	I1028 12:26:54.130646  113146 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 12:26:54.130652  113146 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 12:26:54.130658  113146 command_runner.go:130] > Compiler:       gc
	I1028 12:26:54.130664  113146 command_runner.go:130] > Platform:       linux/amd64
	I1028 12:26:54.130671  113146 command_runner.go:130] > Linkmode:       dynamic
	I1028 12:26:54.130677  113146 command_runner.go:130] > BuildTags:      
	I1028 12:26:54.130685  113146 command_runner.go:130] >   containers_image_ostree_stub
	I1028 12:26:54.130693  113146 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 12:26:54.130703  113146 command_runner.go:130] >   btrfs_noversion
	I1028 12:26:54.130710  113146 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 12:26:54.130718  113146 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 12:26:54.130746  113146 command_runner.go:130] >   seccomp
	I1028 12:26:54.130756  113146 command_runner.go:130] > LDFlags:          unknown
	I1028 12:26:54.130763  113146 command_runner.go:130] > SeccompEnabled:   true
	I1028 12:26:54.130770  113146 command_runner.go:130] > AppArmorEnabled:  false
	I1028 12:26:54.133656  113146 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:26:54.135136  113146 main.go:141] libmachine: (multinode-363277) Calling .GetIP
	I1028 12:26:54.138093  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:54.138450  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:54.138468  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:54.138689  113146 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:26:54.142520  113146 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1028 12:26:54.142608  113146 kubeadm.go:883] updating cluster {Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:26:54.142758  113146 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:26:54.142799  113146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:26:54.178788  113146 command_runner.go:130] > {
	I1028 12:26:54.178813  113146 command_runner.go:130] >   "images": [
	I1028 12:26:54.178818  113146 command_runner.go:130] >     {
	I1028 12:26:54.178828  113146 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 12:26:54.178833  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.178847  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 12:26:54.178853  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178857  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.178866  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 12:26:54.178873  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 12:26:54.178879  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178884  113146 command_runner.go:130] >       "size": "94965812",
	I1028 12:26:54.178888  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.178892  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.178900  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.178905  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.178908  113146 command_runner.go:130] >     },
	I1028 12:26:54.178913  113146 command_runner.go:130] >     {
	I1028 12:26:54.178919  113146 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 12:26:54.178924  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.178929  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 12:26:54.178934  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178938  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.178945  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 12:26:54.178952  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 12:26:54.178959  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178963  113146 command_runner.go:130] >       "size": "1363676",
	I1028 12:26:54.178967  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.178981  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.178987  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.178991  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.178994  113146 command_runner.go:130] >     },
	I1028 12:26:54.178998  113146 command_runner.go:130] >     {
	I1028 12:26:54.179004  113146 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 12:26:54.179010  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179015  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 12:26:54.179019  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179025  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179033  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 12:26:54.179044  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 12:26:54.179051  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179055  113146 command_runner.go:130] >       "size": "31470524",
	I1028 12:26:54.179059  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.179065  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179069  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179076  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179079  113146 command_runner.go:130] >     },
	I1028 12:26:54.179083  113146 command_runner.go:130] >     {
	I1028 12:26:54.179089  113146 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 12:26:54.179096  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179103  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 12:26:54.179107  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179113  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179120  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 12:26:54.179135  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 12:26:54.179141  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179145  113146 command_runner.go:130] >       "size": "63273227",
	I1028 12:26:54.179151  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.179156  113146 command_runner.go:130] >       "username": "nonroot",
	I1028 12:26:54.179161  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179166  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179181  113146 command_runner.go:130] >     },
	I1028 12:26:54.179187  113146 command_runner.go:130] >     {
	I1028 12:26:54.179193  113146 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 12:26:54.179200  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179204  113146 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 12:26:54.179210  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179214  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179221  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 12:26:54.179230  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 12:26:54.179234  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179238  113146 command_runner.go:130] >       "size": "149009664",
	I1028 12:26:54.179242  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179246  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179249  113146 command_runner.go:130] >       },
	I1028 12:26:54.179253  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179257  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179261  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179264  113146 command_runner.go:130] >     },
	I1028 12:26:54.179268  113146 command_runner.go:130] >     {
	I1028 12:26:54.179274  113146 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 12:26:54.179278  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179283  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 12:26:54.179287  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179291  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179301  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 12:26:54.179308  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 12:26:54.179314  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179318  113146 command_runner.go:130] >       "size": "95274464",
	I1028 12:26:54.179321  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179325  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179329  113146 command_runner.go:130] >       },
	I1028 12:26:54.179333  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179337  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179346  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179352  113146 command_runner.go:130] >     },
	I1028 12:26:54.179355  113146 command_runner.go:130] >     {
	I1028 12:26:54.179361  113146 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 12:26:54.179368  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179373  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 12:26:54.179378  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179382  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179389  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 12:26:54.179399  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 12:26:54.179404  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179408  113146 command_runner.go:130] >       "size": "89474374",
	I1028 12:26:54.179412  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179416  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179421  113146 command_runner.go:130] >       },
	I1028 12:26:54.179425  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179429  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179434  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179438  113146 command_runner.go:130] >     },
	I1028 12:26:54.179441  113146 command_runner.go:130] >     {
	I1028 12:26:54.179447  113146 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 12:26:54.179452  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179457  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 12:26:54.179462  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179466  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179486  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 12:26:54.179496  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 12:26:54.179500  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179504  113146 command_runner.go:130] >       "size": "92783513",
	I1028 12:26:54.179508  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.179512  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179515  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179518  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179531  113146 command_runner.go:130] >     },
	I1028 12:26:54.179535  113146 command_runner.go:130] >     {
	I1028 12:26:54.179540  113146 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 12:26:54.179544  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179548  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 12:26:54.179551  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179556  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179567  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 12:26:54.179574  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 12:26:54.179577  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179581  113146 command_runner.go:130] >       "size": "68457798",
	I1028 12:26:54.179585  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179589  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179592  113146 command_runner.go:130] >       },
	I1028 12:26:54.179596  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179599  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179603  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179606  113146 command_runner.go:130] >     },
	I1028 12:26:54.179610  113146 command_runner.go:130] >     {
	I1028 12:26:54.179616  113146 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 12:26:54.179622  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179636  113146 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 12:26:54.179641  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179645  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179651  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 12:26:54.179658  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 12:26:54.179661  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179665  113146 command_runner.go:130] >       "size": "742080",
	I1028 12:26:54.179669  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179673  113146 command_runner.go:130] >         "value": "65535"
	I1028 12:26:54.179677  113146 command_runner.go:130] >       },
	I1028 12:26:54.179681  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179685  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179694  113146 command_runner.go:130] >       "pinned": true
	I1028 12:26:54.179700  113146 command_runner.go:130] >     }
	I1028 12:26:54.179704  113146 command_runner.go:130] >   ]
	I1028 12:26:54.179708  113146 command_runner.go:130] > }
	I1028 12:26:54.180230  113146 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:26:54.180251  113146 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:26:54.180301  113146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:26:54.210635  113146 command_runner.go:130] > {
	I1028 12:26:54.210669  113146 command_runner.go:130] >   "images": [
	I1028 12:26:54.210675  113146 command_runner.go:130] >     {
	I1028 12:26:54.210684  113146 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 12:26:54.210689  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210695  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 12:26:54.210699  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210703  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210711  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 12:26:54.210718  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 12:26:54.210722  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210726  113146 command_runner.go:130] >       "size": "94965812",
	I1028 12:26:54.210730  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.210737  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.210743  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.210749  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.210755  113146 command_runner.go:130] >     },
	I1028 12:26:54.210760  113146 command_runner.go:130] >     {
	I1028 12:26:54.210770  113146 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 12:26:54.210780  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210787  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 12:26:54.210793  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210799  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210835  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 12:26:54.210847  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 12:26:54.210851  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210855  113146 command_runner.go:130] >       "size": "1363676",
	I1028 12:26:54.210859  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.210866  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.210870  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.210874  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.210883  113146 command_runner.go:130] >     },
	I1028 12:26:54.210887  113146 command_runner.go:130] >     {
	I1028 12:26:54.210893  113146 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 12:26:54.210897  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210902  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 12:26:54.210905  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210909  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210918  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 12:26:54.210928  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 12:26:54.210931  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210935  113146 command_runner.go:130] >       "size": "31470524",
	I1028 12:26:54.210939  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.210943  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.210949  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.210953  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.210956  113146 command_runner.go:130] >     },
	I1028 12:26:54.210960  113146 command_runner.go:130] >     {
	I1028 12:26:54.210965  113146 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 12:26:54.210971  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210976  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 12:26:54.210979  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210983  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210990  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 12:26:54.211004  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 12:26:54.211010  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211021  113146 command_runner.go:130] >       "size": "63273227",
	I1028 12:26:54.211028  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.211032  113146 command_runner.go:130] >       "username": "nonroot",
	I1028 12:26:54.211036  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211040  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211043  113146 command_runner.go:130] >     },
	I1028 12:26:54.211047  113146 command_runner.go:130] >     {
	I1028 12:26:54.211055  113146 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 12:26:54.211059  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211081  113146 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 12:26:54.211094  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211103  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211113  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 12:26:54.211119  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 12:26:54.211125  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211129  113146 command_runner.go:130] >       "size": "149009664",
	I1028 12:26:54.211133  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211137  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211141  113146 command_runner.go:130] >       },
	I1028 12:26:54.211145  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211149  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211153  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211157  113146 command_runner.go:130] >     },
	I1028 12:26:54.211160  113146 command_runner.go:130] >     {
	I1028 12:26:54.211166  113146 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 12:26:54.211172  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211177  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 12:26:54.211181  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211185  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211193  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 12:26:54.211200  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 12:26:54.211206  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211210  113146 command_runner.go:130] >       "size": "95274464",
	I1028 12:26:54.211219  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211226  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211229  113146 command_runner.go:130] >       },
	I1028 12:26:54.211233  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211237  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211241  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211245  113146 command_runner.go:130] >     },
	I1028 12:26:54.211248  113146 command_runner.go:130] >     {
	I1028 12:26:54.211254  113146 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 12:26:54.211260  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211265  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 12:26:54.211271  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211275  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211282  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 12:26:54.211291  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 12:26:54.211295  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211301  113146 command_runner.go:130] >       "size": "89474374",
	I1028 12:26:54.211305  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211310  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211313  113146 command_runner.go:130] >       },
	I1028 12:26:54.211319  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211323  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211328  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211332  113146 command_runner.go:130] >     },
	I1028 12:26:54.211335  113146 command_runner.go:130] >     {
	I1028 12:26:54.211341  113146 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 12:26:54.211347  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211352  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 12:26:54.211358  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211361  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211380  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 12:26:54.211390  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 12:26:54.211394  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211402  113146 command_runner.go:130] >       "size": "92783513",
	I1028 12:26:54.211408  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.211412  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211416  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211420  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211423  113146 command_runner.go:130] >     },
	I1028 12:26:54.211427  113146 command_runner.go:130] >     {
	I1028 12:26:54.211432  113146 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 12:26:54.211438  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211443  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 12:26:54.211449  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211452  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211459  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 12:26:54.211467  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 12:26:54.211471  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211475  113146 command_runner.go:130] >       "size": "68457798",
	I1028 12:26:54.211478  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211482  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211486  113146 command_runner.go:130] >       },
	I1028 12:26:54.211490  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211494  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211498  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211501  113146 command_runner.go:130] >     },
	I1028 12:26:54.211505  113146 command_runner.go:130] >     {
	I1028 12:26:54.211511  113146 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 12:26:54.211516  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211521  113146 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 12:26:54.211525  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211529  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211536  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 12:26:54.211545  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 12:26:54.211548  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211552  113146 command_runner.go:130] >       "size": "742080",
	I1028 12:26:54.211561  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211567  113146 command_runner.go:130] >         "value": "65535"
	I1028 12:26:54.211571  113146 command_runner.go:130] >       },
	I1028 12:26:54.211575  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211578  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211582  113146 command_runner.go:130] >       "pinned": true
	I1028 12:26:54.211586  113146 command_runner.go:130] >     }
	I1028 12:26:54.211589  113146 command_runner.go:130] >   ]
	I1028 12:26:54.211595  113146 command_runner.go:130] > }
	I1028 12:26:54.211735  113146 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:26:54.211747  113146 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:26:54.211755  113146 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.31.2 crio true true} ...
	I1028 12:26:54.211863  113146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-363277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:26:54.211935  113146 ssh_runner.go:195] Run: crio config
	I1028 12:26:54.257183  113146 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1028 12:26:54.257230  113146 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1028 12:26:54.257243  113146 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1028 12:26:54.257249  113146 command_runner.go:130] > #
	I1028 12:26:54.257257  113146 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1028 12:26:54.257264  113146 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1028 12:26:54.257273  113146 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1028 12:26:54.257288  113146 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1028 12:26:54.257301  113146 command_runner.go:130] > # reload'.
	I1028 12:26:54.257312  113146 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1028 12:26:54.257327  113146 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1028 12:26:54.257338  113146 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1028 12:26:54.257348  113146 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1028 12:26:54.257359  113146 command_runner.go:130] > [crio]
	I1028 12:26:54.257369  113146 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1028 12:26:54.257384  113146 command_runner.go:130] > # containers images, in this directory.
	I1028 12:26:54.257394  113146 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1028 12:26:54.257411  113146 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1028 12:26:54.257425  113146 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1028 12:26:54.257441  113146 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1028 12:26:54.257519  113146 command_runner.go:130] > # imagestore = ""
	I1028 12:26:54.257537  113146 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1028 12:26:54.257543  113146 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1028 12:26:54.257594  113146 command_runner.go:130] > storage_driver = "overlay"
	I1028 12:26:54.257611  113146 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1028 12:26:54.257625  113146 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1028 12:26:54.257635  113146 command_runner.go:130] > storage_option = [
	I1028 12:26:54.257728  113146 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1028 12:26:54.257751  113146 command_runner.go:130] > ]
	I1028 12:26:54.257768  113146 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1028 12:26:54.257782  113146 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1028 12:26:54.258037  113146 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1028 12:26:54.258050  113146 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1028 12:26:54.258056  113146 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1028 12:26:54.258060  113146 command_runner.go:130] > # always happen on a node reboot
	I1028 12:26:54.258246  113146 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1028 12:26:54.258275  113146 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1028 12:26:54.258285  113146 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1028 12:26:54.258290  113146 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1028 12:26:54.258365  113146 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1028 12:26:54.258384  113146 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1028 12:26:54.258396  113146 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1028 12:26:54.258614  113146 command_runner.go:130] > # internal_wipe = true
	I1028 12:26:54.258635  113146 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1028 12:26:54.258646  113146 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1028 12:26:54.258658  113146 command_runner.go:130] > # internal_repair = false
	I1028 12:26:54.258674  113146 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1028 12:26:54.258689  113146 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1028 12:26:54.258702  113146 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1028 12:26:54.258833  113146 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1028 12:26:54.258853  113146 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1028 12:26:54.258860  113146 command_runner.go:130] > [crio.api]
	I1028 12:26:54.258869  113146 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1028 12:26:54.259052  113146 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1028 12:26:54.259065  113146 command_runner.go:130] > # IP address on which the stream server will listen.
	I1028 12:26:54.259336  113146 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1028 12:26:54.259356  113146 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1028 12:26:54.259364  113146 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1028 12:26:54.259555  113146 command_runner.go:130] > # stream_port = "0"
	I1028 12:26:54.259567  113146 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1028 12:26:54.259785  113146 command_runner.go:130] > # stream_enable_tls = false
	I1028 12:26:54.259802  113146 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1028 12:26:54.260003  113146 command_runner.go:130] > # stream_idle_timeout = ""
	I1028 12:26:54.260042  113146 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1028 12:26:54.260056  113146 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1028 12:26:54.260067  113146 command_runner.go:130] > # minutes.
	I1028 12:26:54.260243  113146 command_runner.go:130] > # stream_tls_cert = ""
	I1028 12:26:54.260261  113146 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1028 12:26:54.260271  113146 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1028 12:26:54.260396  113146 command_runner.go:130] > # stream_tls_key = ""
	I1028 12:26:54.260414  113146 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1028 12:26:54.260427  113146 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1028 12:26:54.260464  113146 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1028 12:26:54.260538  113146 command_runner.go:130] > # stream_tls_ca = ""
	I1028 12:26:54.260557  113146 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 12:26:54.260625  113146 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1028 12:26:54.260639  113146 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 12:26:54.260738  113146 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1028 12:26:54.260753  113146 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1028 12:26:54.260763  113146 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1028 12:26:54.260773  113146 command_runner.go:130] > [crio.runtime]
	I1028 12:26:54.260784  113146 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1028 12:26:54.260795  113146 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1028 12:26:54.260803  113146 command_runner.go:130] > # "nofile=1024:2048"
	I1028 12:26:54.260816  113146 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1028 12:26:54.260908  113146 command_runner.go:130] > # default_ulimits = [
	I1028 12:26:54.261001  113146 command_runner.go:130] > # ]
	I1028 12:26:54.261023  113146 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1028 12:26:54.261214  113146 command_runner.go:130] > # no_pivot = false
	I1028 12:26:54.261224  113146 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1028 12:26:54.261230  113146 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1028 12:26:54.261435  113146 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1028 12:26:54.261448  113146 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1028 12:26:54.261456  113146 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1028 12:26:54.261468  113146 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 12:26:54.261556  113146 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1028 12:26:54.261565  113146 command_runner.go:130] > # Cgroup setting for conmon
	I1028 12:26:54.261577  113146 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1028 12:26:54.261698  113146 command_runner.go:130] > conmon_cgroup = "pod"
	I1028 12:26:54.261719  113146 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1028 12:26:54.261728  113146 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1028 12:26:54.261745  113146 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 12:26:54.261754  113146 command_runner.go:130] > conmon_env = [
	I1028 12:26:54.261764  113146 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 12:26:54.261931  113146 command_runner.go:130] > ]
	I1028 12:26:54.261943  113146 command_runner.go:130] > # Additional environment variables to set for all the
	I1028 12:26:54.261951  113146 command_runner.go:130] > # containers. These are overridden if set in the
	I1028 12:26:54.261961  113146 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1028 12:26:54.261992  113146 command_runner.go:130] > # default_env = [
	I1028 12:26:54.262100  113146 command_runner.go:130] > # ]
	I1028 12:26:54.262113  113146 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1028 12:26:54.262125  113146 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1028 12:26:54.262339  113146 command_runner.go:130] > # selinux = false
	I1028 12:26:54.262355  113146 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1028 12:26:54.262366  113146 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1028 12:26:54.262378  113146 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1028 12:26:54.262494  113146 command_runner.go:130] > # seccomp_profile = ""
	I1028 12:26:54.262505  113146 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1028 12:26:54.262511  113146 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1028 12:26:54.262517  113146 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1028 12:26:54.262521  113146 command_runner.go:130] > # which might increase security.
	I1028 12:26:54.262526  113146 command_runner.go:130] > # This option is currently deprecated,
	I1028 12:26:54.262533  113146 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1028 12:26:54.262634  113146 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1028 12:26:54.262645  113146 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1028 12:26:54.262651  113146 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1028 12:26:54.262659  113146 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1028 12:26:54.262665  113146 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1028 12:26:54.262672  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.262893  113146 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1028 12:26:54.262908  113146 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1028 12:26:54.262915  113146 command_runner.go:130] > # the cgroup blockio controller.
	I1028 12:26:54.262949  113146 command_runner.go:130] > # blockio_config_file = ""
	I1028 12:26:54.262964  113146 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1028 12:26:54.262973  113146 command_runner.go:130] > # blockio parameters.
	I1028 12:26:54.263183  113146 command_runner.go:130] > # blockio_reload = false
	I1028 12:26:54.263199  113146 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1028 12:26:54.263206  113146 command_runner.go:130] > # irqbalance daemon.
	I1028 12:26:54.263404  113146 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1028 12:26:54.263419  113146 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1028 12:26:54.263441  113146 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1028 12:26:54.263456  113146 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1028 12:26:54.263662  113146 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1028 12:26:54.263684  113146 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1028 12:26:54.263694  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.264360  113146 command_runner.go:130] > # rdt_config_file = ""
	I1028 12:26:54.264373  113146 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1028 12:26:54.264378  113146 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1028 12:26:54.264414  113146 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1028 12:26:54.264425  113146 command_runner.go:130] > # separate_pull_cgroup = ""
	I1028 12:26:54.264434  113146 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1028 12:26:54.264443  113146 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1028 12:26:54.264452  113146 command_runner.go:130] > # will be added.
	I1028 12:26:54.264458  113146 command_runner.go:130] > # default_capabilities = [
	I1028 12:26:54.264465  113146 command_runner.go:130] > # 	"CHOWN",
	I1028 12:26:54.264471  113146 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1028 12:26:54.264477  113146 command_runner.go:130] > # 	"FSETID",
	I1028 12:26:54.264487  113146 command_runner.go:130] > # 	"FOWNER",
	I1028 12:26:54.264493  113146 command_runner.go:130] > # 	"SETGID",
	I1028 12:26:54.264502  113146 command_runner.go:130] > # 	"SETUID",
	I1028 12:26:54.264510  113146 command_runner.go:130] > # 	"SETPCAP",
	I1028 12:26:54.264514  113146 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1028 12:26:54.264520  113146 command_runner.go:130] > # 	"KILL",
	I1028 12:26:54.264524  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264532  113146 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1028 12:26:54.264539  113146 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1028 12:26:54.264546  113146 command_runner.go:130] > # add_inheritable_capabilities = false
	I1028 12:26:54.264558  113146 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1028 12:26:54.264570  113146 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 12:26:54.264578  113146 command_runner.go:130] > default_sysctls = [
	I1028 12:26:54.264586  113146 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1028 12:26:54.264594  113146 command_runner.go:130] > ]
	I1028 12:26:54.264602  113146 command_runner.go:130] > # List of devices on the host that a
	I1028 12:26:54.264615  113146 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1028 12:26:54.264624  113146 command_runner.go:130] > # allowed_devices = [
	I1028 12:26:54.264631  113146 command_runner.go:130] > # 	"/dev/fuse",
	I1028 12:26:54.264651  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264663  113146 command_runner.go:130] > # List of additional devices, specified as
	I1028 12:26:54.264686  113146 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1028 12:26:54.264697  113146 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1028 12:26:54.264706  113146 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 12:26:54.264715  113146 command_runner.go:130] > # additional_devices = [
	I1028 12:26:54.264721  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264732  113146 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1028 12:26:54.264739  113146 command_runner.go:130] > # cdi_spec_dirs = [
	I1028 12:26:54.264748  113146 command_runner.go:130] > # 	"/etc/cdi",
	I1028 12:26:54.264755  113146 command_runner.go:130] > # 	"/var/run/cdi",
	I1028 12:26:54.264763  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264773  113146 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1028 12:26:54.264788  113146 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1028 12:26:54.264797  113146 command_runner.go:130] > # Defaults to false.
	I1028 12:26:54.264801  113146 command_runner.go:130] > # device_ownership_from_security_context = false
	I1028 12:26:54.264813  113146 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1028 12:26:54.264825  113146 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1028 12:26:54.264831  113146 command_runner.go:130] > # hooks_dir = [
	I1028 12:26:54.264842  113146 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1028 12:26:54.264850  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264863  113146 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1028 12:26:54.264873  113146 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1028 12:26:54.264883  113146 command_runner.go:130] > # its default mounts from the following two files:
	I1028 12:26:54.264889  113146 command_runner.go:130] > #
	I1028 12:26:54.264900  113146 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1028 12:26:54.264912  113146 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1028 12:26:54.264923  113146 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1028 12:26:54.264932  113146 command_runner.go:130] > #
	I1028 12:26:54.264942  113146 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1028 12:26:54.264954  113146 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1028 12:26:54.264967  113146 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1028 12:26:54.264977  113146 command_runner.go:130] > #      only add mounts it finds in this file.
	I1028 12:26:54.264992  113146 command_runner.go:130] > #
	I1028 12:26:54.265003  113146 command_runner.go:130] > # default_mounts_file = ""
	I1028 12:26:54.265011  113146 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1028 12:26:54.265026  113146 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1028 12:26:54.265035  113146 command_runner.go:130] > pids_limit = 1024
	I1028 12:26:54.265045  113146 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1028 12:26:54.265057  113146 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1028 12:26:54.265069  113146 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1028 12:26:54.265084  113146 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1028 12:26:54.265093  113146 command_runner.go:130] > # log_size_max = -1
	I1028 12:26:54.265104  113146 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1028 12:26:54.265115  113146 command_runner.go:130] > # log_to_journald = false
	I1028 12:26:54.265124  113146 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1028 12:26:54.265135  113146 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1028 12:26:54.265146  113146 command_runner.go:130] > # Path to directory for container attach sockets.
	I1028 12:26:54.265157  113146 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1028 12:26:54.265167  113146 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1028 12:26:54.265176  113146 command_runner.go:130] > # bind_mount_prefix = ""
	I1028 12:26:54.265187  113146 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1028 12:26:54.265194  113146 command_runner.go:130] > # read_only = false
	I1028 12:26:54.265206  113146 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1028 12:26:54.265218  113146 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1028 12:26:54.265228  113146 command_runner.go:130] > # live configuration reload.
	I1028 12:26:54.265235  113146 command_runner.go:130] > # log_level = "info"
	I1028 12:26:54.265246  113146 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1028 12:26:54.265256  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.265265  113146 command_runner.go:130] > # log_filter = ""
	I1028 12:26:54.265274  113146 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1028 12:26:54.265286  113146 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1028 12:26:54.265295  113146 command_runner.go:130] > # separated by comma.
	I1028 12:26:54.265313  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265323  113146 command_runner.go:130] > # uid_mappings = ""
	I1028 12:26:54.265332  113146 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1028 12:26:54.265351  113146 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1028 12:26:54.265361  113146 command_runner.go:130] > # separated by comma.
	I1028 12:26:54.265373  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265382  113146 command_runner.go:130] > # gid_mappings = ""
	I1028 12:26:54.265392  113146 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1028 12:26:54.265401  113146 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 12:26:54.265410  113146 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 12:26:54.265419  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265425  113146 command_runner.go:130] > # minimum_mappable_uid = -1
	I1028 12:26:54.265431  113146 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1028 12:26:54.265439  113146 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 12:26:54.265446  113146 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 12:26:54.265453  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265459  113146 command_runner.go:130] > # minimum_mappable_gid = -1
	I1028 12:26:54.265465  113146 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1028 12:26:54.265472  113146 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1028 12:26:54.265478  113146 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1028 12:26:54.265487  113146 command_runner.go:130] > # ctr_stop_timeout = 30
	I1028 12:26:54.265496  113146 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1028 12:26:54.265508  113146 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1028 12:26:54.265518  113146 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1028 12:26:54.265526  113146 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1028 12:26:54.265543  113146 command_runner.go:130] > drop_infra_ctr = false
	I1028 12:26:54.265555  113146 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1028 12:26:54.265567  113146 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1028 12:26:54.265581  113146 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1028 12:26:54.265590  113146 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1028 12:26:54.265601  113146 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1028 12:26:54.265614  113146 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1028 12:26:54.265626  113146 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1028 12:26:54.265641  113146 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1028 12:26:54.265650  113146 command_runner.go:130] > # shared_cpuset = ""
	I1028 12:26:54.265660  113146 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1028 12:26:54.265677  113146 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1028 12:26:54.265685  113146 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1028 12:26:54.265695  113146 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1028 12:26:54.265704  113146 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1028 12:26:54.265716  113146 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1028 12:26:54.265731  113146 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1028 12:26:54.265741  113146 command_runner.go:130] > # enable_criu_support = false
	I1028 12:26:54.265752  113146 command_runner.go:130] > # Enable/disable the generation of the container,
	I1028 12:26:54.265765  113146 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1028 12:26:54.265775  113146 command_runner.go:130] > # enable_pod_events = false
	I1028 12:26:54.265784  113146 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 12:26:54.265807  113146 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1028 12:26:54.265816  113146 command_runner.go:130] > # default_runtime = "runc"
	I1028 12:26:54.265828  113146 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1028 12:26:54.265842  113146 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1028 12:26:54.265858  113146 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1028 12:26:54.265867  113146 command_runner.go:130] > # creation as a file is not desired either.
	I1028 12:26:54.265875  113146 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1028 12:26:54.265885  113146 command_runner.go:130] > # the hostname is being managed dynamically.
	I1028 12:26:54.265895  113146 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1028 12:26:54.265903  113146 command_runner.go:130] > # ]
	I1028 12:26:54.265914  113146 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1028 12:26:54.265927  113146 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1028 12:26:54.265939  113146 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1028 12:26:54.265949  113146 command_runner.go:130] > # Each entry in the table should follow the format:
	I1028 12:26:54.265957  113146 command_runner.go:130] > #
	I1028 12:26:54.265967  113146 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1028 12:26:54.265975  113146 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1028 12:26:54.266029  113146 command_runner.go:130] > # runtime_type = "oci"
	I1028 12:26:54.266042  113146 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1028 12:26:54.266050  113146 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1028 12:26:54.266061  113146 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1028 12:26:54.266078  113146 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1028 12:26:54.266088  113146 command_runner.go:130] > # monitor_env = []
	I1028 12:26:54.266098  113146 command_runner.go:130] > # privileged_without_host_devices = false
	I1028 12:26:54.266107  113146 command_runner.go:130] > # allowed_annotations = []
	I1028 12:26:54.266117  113146 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1028 12:26:54.266123  113146 command_runner.go:130] > # Where:
	I1028 12:26:54.266132  113146 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1028 12:26:54.266144  113146 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1028 12:26:54.266157  113146 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1028 12:26:54.266169  113146 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1028 12:26:54.266177  113146 command_runner.go:130] > #   in $PATH.
	I1028 12:26:54.266190  113146 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1028 12:26:54.266200  113146 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1028 12:26:54.266209  113146 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1028 12:26:54.266216  113146 command_runner.go:130] > #   state.
	I1028 12:26:54.266229  113146 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1028 12:26:54.266241  113146 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1028 12:26:54.266253  113146 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1028 12:26:54.266264  113146 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1028 12:26:54.266275  113146 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1028 12:26:54.266288  113146 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1028 12:26:54.266295  113146 command_runner.go:130] > #   The currently recognized values are:
	I1028 12:26:54.266305  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1028 12:26:54.266318  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1028 12:26:54.266330  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1028 12:26:54.266341  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1028 12:26:54.266356  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1028 12:26:54.266368  113146 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1028 12:26:54.266380  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1028 12:26:54.266388  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1028 12:26:54.266405  113146 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1028 12:26:54.266417  113146 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1028 12:26:54.266427  113146 command_runner.go:130] > #   deprecated option "conmon".
	I1028 12:26:54.266447  113146 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1028 12:26:54.266458  113146 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1028 12:26:54.266470  113146 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1028 12:26:54.266480  113146 command_runner.go:130] > #   should be moved to the container's cgroup
	I1028 12:26:54.266490  113146 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1028 12:26:54.266500  113146 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1028 12:26:54.266514  113146 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1028 12:26:54.266525  113146 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1028 12:26:54.266533  113146 command_runner.go:130] > #
	I1028 12:26:54.266544  113146 command_runner.go:130] > # Using the seccomp notifier feature:
	I1028 12:26:54.266552  113146 command_runner.go:130] > #
	I1028 12:26:54.266561  113146 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1028 12:26:54.266573  113146 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1028 12:26:54.266580  113146 command_runner.go:130] > #
	I1028 12:26:54.266586  113146 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1028 12:26:54.266598  113146 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1028 12:26:54.266608  113146 command_runner.go:130] > #
	I1028 12:26:54.266618  113146 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1028 12:26:54.266627  113146 command_runner.go:130] > # feature.
	I1028 12:26:54.266635  113146 command_runner.go:130] > #
	I1028 12:26:54.266648  113146 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1028 12:26:54.266661  113146 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1028 12:26:54.266673  113146 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1028 12:26:54.266682  113146 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1028 12:26:54.266694  113146 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1028 12:26:54.266703  113146 command_runner.go:130] > #
	I1028 12:26:54.266712  113146 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1028 12:26:54.266725  113146 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1028 12:26:54.266733  113146 command_runner.go:130] > #
	I1028 12:26:54.266743  113146 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1028 12:26:54.266755  113146 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1028 12:26:54.266763  113146 command_runner.go:130] > #
	I1028 12:26:54.266772  113146 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1028 12:26:54.266789  113146 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1028 12:26:54.266797  113146 command_runner.go:130] > # limitation.
	I1028 12:26:54.266805  113146 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1028 12:26:54.266815  113146 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1028 12:26:54.266825  113146 command_runner.go:130] > runtime_type = "oci"
	I1028 12:26:54.266832  113146 command_runner.go:130] > runtime_root = "/run/runc"
	I1028 12:26:54.266842  113146 command_runner.go:130] > runtime_config_path = ""
	I1028 12:26:54.266849  113146 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1028 12:26:54.266858  113146 command_runner.go:130] > monitor_cgroup = "pod"
	I1028 12:26:54.266864  113146 command_runner.go:130] > monitor_exec_cgroup = ""
	I1028 12:26:54.266871  113146 command_runner.go:130] > monitor_env = [
	I1028 12:26:54.266881  113146 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 12:26:54.266888  113146 command_runner.go:130] > ]
	I1028 12:26:54.266896  113146 command_runner.go:130] > privileged_without_host_devices = false
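	The comment block above documents the [crio.runtime.runtimes.<handler>] table format that the runc entry just shown follows. As a minimal sketch only (the handler name and binary path below are assumptions for illustration, not something configured in this run), a second handler such as crun would be declared the same way and selected per pod through a RuntimeClass whose handler field matches the table name:

		[crio.runtime.runtimes.crun]
		runtime_path = "/usr/bin/crun"              # assumed install location, not verified on this node
		runtime_type = "oci"
		runtime_root = "/run/crun"
		monitor_path = "/usr/libexec/crio/conmon"   # same conmon monitor as the runc entry above
		monitor_cgroup = "pod"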
	I1028 12:26:54.266909  113146 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1028 12:26:54.266920  113146 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1028 12:26:54.266931  113146 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1028 12:26:54.266945  113146 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1028 12:26:54.266957  113146 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1028 12:26:54.266968  113146 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1028 12:26:54.266985  113146 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1028 12:26:54.267001  113146 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1028 12:26:54.267012  113146 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1028 12:26:54.267026  113146 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1028 12:26:54.267034  113146 command_runner.go:130] > # Example:
	I1028 12:26:54.267041  113146 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1028 12:26:54.267048  113146 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1028 12:26:54.267058  113146 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1028 12:26:54.267069  113146 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1028 12:26:54.267078  113146 command_runner.go:130] > # cpuset = 0
	I1028 12:26:54.267087  113146 command_runner.go:130] > # cpushares = "0-1"
	I1028 12:26:54.267093  113146 command_runner.go:130] > # Where:
	I1028 12:26:54.267103  113146 command_runner.go:130] > # The workload name is workload-type.
	I1028 12:26:54.267122  113146 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1028 12:26:54.267132  113146 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1028 12:26:54.267144  113146 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1028 12:26:54.267159  113146 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1028 12:26:54.267171  113146 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1028 12:26:54.267182  113146 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1028 12:26:54.267194  113146 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1028 12:26:54.267204  113146 command_runner.go:130] > # Default value is set to true
	I1028 12:26:54.267211  113146 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1028 12:26:54.267217  113146 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1028 12:26:54.267228  113146 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1028 12:26:54.267238  113146 command_runner.go:130] > # Default value is set to 'false'
	I1028 12:26:54.267248  113146 command_runner.go:130] > # disable_hostport_mapping = false
	I1028 12:26:54.267261  113146 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1028 12:26:54.267268  113146 command_runner.go:130] > #
	I1028 12:26:54.267277  113146 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1028 12:26:54.267288  113146 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1028 12:26:54.267295  113146 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1028 12:26:54.267305  113146 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1028 12:26:54.267313  113146 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1028 12:26:54.267319  113146 command_runner.go:130] > [crio.image]
	I1028 12:26:54.267328  113146 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1028 12:26:54.267335  113146 command_runner.go:130] > # default_transport = "docker://"
	I1028 12:26:54.267351  113146 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1028 12:26:54.267361  113146 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1028 12:26:54.267368  113146 command_runner.go:130] > # global_auth_file = ""
	I1028 12:26:54.267378  113146 command_runner.go:130] > # The image used to instantiate infra containers.
	I1028 12:26:54.267386  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.267393  113146 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1028 12:26:54.267406  113146 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1028 12:26:54.267419  113146 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1028 12:26:54.267430  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.267443  113146 command_runner.go:130] > # pause_image_auth_file = ""
	I1028 12:26:54.267461  113146 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1028 12:26:54.267470  113146 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1028 12:26:54.267481  113146 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1028 12:26:54.267494  113146 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1028 12:26:54.267503  113146 command_runner.go:130] > # pause_command = "/pause"
	I1028 12:26:54.267513  113146 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1028 12:26:54.267525  113146 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1028 12:26:54.267536  113146 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1028 12:26:54.267548  113146 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1028 12:26:54.267556  113146 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1028 12:26:54.267567  113146 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1028 12:26:54.267578  113146 command_runner.go:130] > # pinned_images = [
	I1028 12:26:54.267583  113146 command_runner.go:130] > # ]
	I1028 12:26:54.267595  113146 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1028 12:26:54.267608  113146 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1028 12:26:54.267621  113146 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1028 12:26:54.267651  113146 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1028 12:26:54.267666  113146 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1028 12:26:54.267676  113146 command_runner.go:130] > # signature_policy = ""
	I1028 12:26:54.267686  113146 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1028 12:26:54.267699  113146 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1028 12:26:54.267709  113146 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1028 12:26:54.267718  113146 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1028 12:26:54.267729  113146 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1028 12:26:54.267740  113146 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1028 12:26:54.267752  113146 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1028 12:26:54.267765  113146 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1028 12:26:54.267774  113146 command_runner.go:130] > # changing them here.
	I1028 12:26:54.267784  113146 command_runner.go:130] > # insecure_registries = [
	I1028 12:26:54.267792  113146 command_runner.go:130] > # ]
	I1028 12:26:54.267798  113146 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1028 12:26:54.267807  113146 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1028 12:26:54.267817  113146 command_runner.go:130] > # image_volumes = "mkdir"
	I1028 12:26:54.267837  113146 command_runner.go:130] > # Temporary directory to use for storing big files
	I1028 12:26:54.267847  113146 command_runner.go:130] > # big_files_temporary_dir = ""
	I1028 12:26:54.267859  113146 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1028 12:26:54.267867  113146 command_runner.go:130] > # CNI plugins.
	I1028 12:26:54.267873  113146 command_runner.go:130] > [crio.network]
	I1028 12:26:54.267882  113146 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1028 12:26:54.267892  113146 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1028 12:26:54.267901  113146 command_runner.go:130] > # cni_default_network = ""
	I1028 12:26:54.267915  113146 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1028 12:26:54.267925  113146 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1028 12:26:54.267936  113146 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1028 12:26:54.267945  113146 command_runner.go:130] > # plugin_dirs = [
	I1028 12:26:54.267954  113146 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1028 12:26:54.267959  113146 command_runner.go:130] > # ]
	I1028 12:26:54.267969  113146 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1028 12:26:54.267975  113146 command_runner.go:130] > [crio.metrics]
	I1028 12:26:54.267982  113146 command_runner.go:130] > # Globally enable or disable metrics support.
	I1028 12:26:54.267992  113146 command_runner.go:130] > enable_metrics = true
	I1028 12:26:54.268003  113146 command_runner.go:130] > # Specify enabled metrics collectors.
	I1028 12:26:54.268013  113146 command_runner.go:130] > # Per default all metrics are enabled.
	I1028 12:26:54.268026  113146 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1028 12:26:54.268037  113146 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1028 12:26:54.268049  113146 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1028 12:26:54.268056  113146 command_runner.go:130] > # metrics_collectors = [
	I1028 12:26:54.268059  113146 command_runner.go:130] > # 	"operations",
	I1028 12:26:54.268069  113146 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1028 12:26:54.268079  113146 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1028 12:26:54.268089  113146 command_runner.go:130] > # 	"operations_errors",
	I1028 12:26:54.268098  113146 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1028 12:26:54.268108  113146 command_runner.go:130] > # 	"image_pulls_by_name",
	I1028 12:26:54.268117  113146 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1028 12:26:54.268125  113146 command_runner.go:130] > # 	"image_pulls_failures",
	I1028 12:26:54.268134  113146 command_runner.go:130] > # 	"image_pulls_successes",
	I1028 12:26:54.268147  113146 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1028 12:26:54.268158  113146 command_runner.go:130] > # 	"image_layer_reuse",
	I1028 12:26:54.268169  113146 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1028 12:26:54.268180  113146 command_runner.go:130] > # 	"containers_oom_total",
	I1028 12:26:54.268188  113146 command_runner.go:130] > # 	"containers_oom",
	I1028 12:26:54.268197  113146 command_runner.go:130] > # 	"processes_defunct",
	I1028 12:26:54.268207  113146 command_runner.go:130] > # 	"operations_total",
	I1028 12:26:54.268216  113146 command_runner.go:130] > # 	"operations_latency_seconds",
	I1028 12:26:54.268224  113146 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1028 12:26:54.268230  113146 command_runner.go:130] > # 	"operations_errors_total",
	I1028 12:26:54.268243  113146 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1028 12:26:54.268254  113146 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1028 12:26:54.268264  113146 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1028 12:26:54.268273  113146 command_runner.go:130] > # 	"image_pulls_success_total",
	I1028 12:26:54.268282  113146 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1028 12:26:54.268292  113146 command_runner.go:130] > # 	"containers_oom_count_total",
	I1028 12:26:54.268302  113146 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1028 12:26:54.268310  113146 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1028 12:26:54.268314  113146 command_runner.go:130] > # ]
	I1028 12:26:54.268324  113146 command_runner.go:130] > # The port on which the metrics server will listen.
	I1028 12:26:54.268333  113146 command_runner.go:130] > # metrics_port = 9090
	I1028 12:26:54.268345  113146 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1028 12:26:54.268354  113146 command_runner.go:130] > # metrics_socket = ""
	I1028 12:26:54.268362  113146 command_runner.go:130] > # The certificate for the secure metrics server.
	I1028 12:26:54.268374  113146 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1028 12:26:54.268386  113146 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1028 12:26:54.268394  113146 command_runner.go:130] > # certificate on any modification event.
	I1028 12:26:54.268401  113146 command_runner.go:130] > # metrics_cert = ""
	I1028 12:26:54.268410  113146 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1028 12:26:54.268421  113146 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1028 12:26:54.268430  113146 command_runner.go:130] > # metrics_key = ""
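	In the dump above, the [crio.metrics] table enables metrics while leaving the port, socket and TLS material at their commented defaults. Written out explicitly (a sketch that simply restates the defaults shown above, not a change made by this test), the active stanza would be:

		[crio.metrics]
		enable_metrics = true
		metrics_port = 9090    # default shown commented out above
		# metrics_cert / metrics_key left unset, as in the generated config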
	I1028 12:26:54.268442  113146 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1028 12:26:54.268452  113146 command_runner.go:130] > [crio.tracing]
	I1028 12:26:54.268469  113146 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1028 12:26:54.268477  113146 command_runner.go:130] > # enable_tracing = false
	I1028 12:26:54.268485  113146 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1028 12:26:54.268491  113146 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1028 12:26:54.268504  113146 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1028 12:26:54.268515  113146 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1028 12:26:54.268524  113146 command_runner.go:130] > # CRI-O NRI configuration.
	I1028 12:26:54.268530  113146 command_runner.go:130] > [crio.nri]
	I1028 12:26:54.268540  113146 command_runner.go:130] > # Globally enable or disable NRI.
	I1028 12:26:54.268549  113146 command_runner.go:130] > # enable_nri = false
	I1028 12:26:54.268559  113146 command_runner.go:130] > # NRI socket to listen on.
	I1028 12:26:54.268567  113146 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1028 12:26:54.268573  113146 command_runner.go:130] > # NRI plugin directory to use.
	I1028 12:26:54.268580  113146 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1028 12:26:54.268591  113146 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1028 12:26:54.268603  113146 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1028 12:26:54.268614  113146 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1028 12:26:54.268624  113146 command_runner.go:130] > # nri_disable_connections = false
	I1028 12:26:54.268632  113146 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1028 12:26:54.268646  113146 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1028 12:26:54.268654  113146 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1028 12:26:54.268659  113146 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1028 12:26:54.268665  113146 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1028 12:26:54.268670  113146 command_runner.go:130] > [crio.stats]
	I1028 12:26:54.268676  113146 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1028 12:26:54.268686  113146 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1028 12:26:54.268697  113146 command_runner.go:130] > # stats_collection_period = 0
	I1028 12:26:54.268744  113146 command_runner.go:130] ! time="2024-10-28 12:26:54.218181467Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1028 12:26:54.268772  113146 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
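	The lines above are the full effective crio.conf that minikube wrote for this node: cgroupfs as the cgroup manager, pids_limit = 1024, and registry.k8s.io/pause:3.10 as the pause image. As a sketch only, assuming the standard CRI-O drop-in directory /etc/crio/crio.conf.d/ is honored on this image (the file name below is hypothetical), individual values could be overridden without editing the generated file, followed by a CRI-O restart or a live reload for options that support it:

		# /etc/crio/crio.conf.d/99-overrides.conf   (hypothetical drop-in)
		[crio.runtime]
		log_level = "debug"    # raise verbosity from the commented default "info"
		pids_limit = 2048      # example override of the value shown above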
	I1028 12:26:54.268878  113146 cni.go:84] Creating CNI manager for ""
	I1028 12:26:54.268894  113146 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 12:26:54.268907  113146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:26:54.268931  113146 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-363277 NodeName:multinode-363277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:26:54.269074  113146 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-363277"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:26:54.269145  113146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:26:54.278934  113146 command_runner.go:130] > kubeadm
	I1028 12:26:54.278947  113146 command_runner.go:130] > kubectl
	I1028 12:26:54.278951  113146 command_runner.go:130] > kubelet
	I1028 12:26:54.279100  113146 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:26:54.279154  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:26:54.287998  113146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 12:26:54.305540  113146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:26:54.321789  113146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1028 12:26:54.339762  113146 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1028 12:26:54.343602  113146 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
	I1028 12:26:54.343840  113146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:26:54.488413  113146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:26:54.502072  113146 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277 for IP: 192.168.39.174
	I1028 12:26:54.502100  113146 certs.go:194] generating shared ca certs ...
	I1028 12:26:54.502137  113146 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:26:54.502336  113146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:26:54.502401  113146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:26:54.502409  113146 certs.go:256] generating profile certs ...
	I1028 12:26:54.502491  113146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/client.key
	I1028 12:26:54.502547  113146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.key.b804b213
	I1028 12:26:54.502584  113146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.key
	I1028 12:26:54.502597  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 12:26:54.502610  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 12:26:54.502628  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 12:26:54.502638  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 12:26:54.502648  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 12:26:54.502659  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 12:26:54.502678  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 12:26:54.502693  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 12:26:54.502739  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:26:54.502764  113146 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:26:54.502776  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:26:54.502815  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:26:54.502857  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:26:54.502884  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:26:54.502931  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:26:54.502957  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.502970  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.502982  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.503565  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:26:54.528654  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:26:54.549978  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:26:54.571092  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:26:54.593005  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:26:54.615090  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:26:54.635764  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:26:54.656553  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:26:54.677848  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:26:54.699019  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:26:54.721786  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:26:54.743434  113146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:26:54.758271  113146 ssh_runner.go:195] Run: openssl version
	I1028 12:26:54.763518  113146 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 12:26:54.763603  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:26:54.773016  113146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.777047  113146 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.777074  113146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.777107  113146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.782229  113146 command_runner.go:130] > 51391683
	I1028 12:26:54.782291  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:26:54.790771  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:26:54.801042  113146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.805011  113146 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.805041  113146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.805084  113146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.810151  113146 command_runner.go:130] > 3ec20f2e
	I1028 12:26:54.810223  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:26:54.818484  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:26:54.827906  113146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.831689  113146 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.831712  113146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.831740  113146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.836745  113146 command_runner.go:130] > b5213941
	I1028 12:26:54.836800  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
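The three ls/hash/ln sequences above all follow the same idiom: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash file names, so each installed .pem needs a <hash>.0 symlink. A minimal sketch of that pattern, using the minikubeCA path and the hash printed in the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

The same three steps are repeated for 84965.pem and 849652.pem with their respective hashes (51391683, 3ec20f2e).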
	I1028 12:26:54.844944  113146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:26:54.848761  113146 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:26:54.848780  113146 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 12:26:54.848786  113146 command_runner.go:130] > Device: 253,1	Inode: 6291502     Links: 1
	I1028 12:26:54.848793  113146 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 12:26:54.848799  113146 command_runner.go:130] > Access: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848804  113146 command_runner.go:130] > Modify: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848809  113146 command_runner.go:130] > Change: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848813  113146 command_runner.go:130] >  Birth: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848857  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:26:54.854031  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.854097  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:26:54.859173  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.859364  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:26:54.864309  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.864357  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:26:54.869173  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.869225  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:26:54.873907  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.874149  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:26:54.879331  113146 command_runner.go:130] > Certificate will not expire
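Each of the expiry checks above uses the same openssl idiom: -checkend N exits 0 if the certificate is still valid N seconds from now, which is how the run confirms every control-plane certificate survives the next 24 hours (86400 s). A standalone sketch of one such check:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least 24h"
    else
      echo "certificate expires within 24h"
    fi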
	I1028 12:26:54.879389  113146 kubeadm.go:392] StartCluster: {Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:26:54.879492  113146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:26:54.879544  113146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:26:54.911765  113146 command_runner.go:130] > b03df03ee35acaf8c99f2c47a6678ed30444be7f72058e32adb573b7b6544dd3
	I1028 12:26:54.911816  113146 command_runner.go:130] > b4d0124a73f58166690efbc377a141df990cc4edac70890411b1a0e278a3c374
	I1028 12:26:54.911825  113146 command_runner.go:130] > 1d507030ea898557fee2b19376e88b69ae364b5aeeec9fb7555f6a6e040cf447
	I1028 12:26:54.911833  113146 command_runner.go:130] > ffa26b10a3810791a68c757fbe3481291d2e771ac8fcf67a662cc067572e7132
	I1028 12:26:54.911838  113146 command_runner.go:130] > 7cecd815f01756107482ffad4e85dc0db4c2b4ef09a12d0b056b5c368d487c59
	I1028 12:26:54.911844  113146 command_runner.go:130] > 6bb5157fc0fd9e1ef085e28966d2d297ccad22275908cd65958962a7cf675b4f
	I1028 12:26:54.911849  113146 command_runner.go:130] > 1d570edc04e5aa175f4a56b27634b7e47b995768bae965e2814c6fb9d95a9969
	I1028 12:26:54.911862  113146 command_runner.go:130] > dc179a1c5110656277e56e9c5310384a548e8c498c63ea4c8582e983c3a50328
	I1028 12:26:54.913358  113146 cri.go:89] found id: "b03df03ee35acaf8c99f2c47a6678ed30444be7f72058e32adb573b7b6544dd3"
	I1028 12:26:54.913373  113146 cri.go:89] found id: "b4d0124a73f58166690efbc377a141df990cc4edac70890411b1a0e278a3c374"
	I1028 12:26:54.913378  113146 cri.go:89] found id: "1d507030ea898557fee2b19376e88b69ae364b5aeeec9fb7555f6a6e040cf447"
	I1028 12:26:54.913381  113146 cri.go:89] found id: "ffa26b10a3810791a68c757fbe3481291d2e771ac8fcf67a662cc067572e7132"
	I1028 12:26:54.913384  113146 cri.go:89] found id: "7cecd815f01756107482ffad4e85dc0db4c2b4ef09a12d0b056b5c368d487c59"
	I1028 12:26:54.913387  113146 cri.go:89] found id: "6bb5157fc0fd9e1ef085e28966d2d297ccad22275908cd65958962a7cf675b4f"
	I1028 12:26:54.913390  113146 cri.go:89] found id: "1d570edc04e5aa175f4a56b27634b7e47b995768bae965e2814c6fb9d95a9969"
	I1028 12:26:54.913392  113146 cri.go:89] found id: "dc179a1c5110656277e56e9c5310384a548e8c498c63ea4c8582e983c3a50328"
	I1028 12:26:54.913394  113146 cri.go:89] found id: ""
	I1028 12:26:54.913437  113146 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
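The tail of the truncated log above shows how the restart path enumerates the existing control-plane containers before reconfiguring the node: crictl is asked for bare container IDs (--quiet) filtered on the kube-system pod-namespace label, and the low-level runc state is then listed. Both commands as they appear in the log:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json

The IDs returned by the first command are the "found id:" entries that cri.go records just before the log is cut off.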
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-363277 -n multinode-363277
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-363277 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.43s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (145.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 stop
E1028 12:29:20.375963   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:30:16.515828   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-363277 stop: exit status 82 (2m0.461292193s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-363277-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-363277 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-363277 status: (18.784267353s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr: (3.360174618s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-363277 -n multinode-363277
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-363277 logs -n 25: (1.916442932s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277:/home/docker/cp-test_multinode-363277-m02_multinode-363277.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277 sudo cat                                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m02_multinode-363277.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03:/home/docker/cp-test_multinode-363277-m02_multinode-363277-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277-m03 sudo cat                                   | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m02_multinode-363277-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp testdata/cp-test.txt                                                | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4154964164/001/cp-test_multinode-363277-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277:/home/docker/cp-test_multinode-363277-m03_multinode-363277.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277 sudo cat                                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m03_multinode-363277.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt                       | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m02:/home/docker/cp-test_multinode-363277-m03_multinode-363277-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n                                                                 | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | multinode-363277-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-363277 ssh -n multinode-363277-m02 sudo cat                                   | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	|         | /home/docker/cp-test_multinode-363277-m03_multinode-363277-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-363277 node stop m03                                                          | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:22 UTC |
	| node    | multinode-363277 node start                                                             | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:22 UTC | 28 Oct 24 12:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-363277                                                                | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:23 UTC |                     |
	| stop    | -p multinode-363277                                                                     | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:23 UTC |                     |
	| start   | -p multinode-363277                                                                     | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:25 UTC | 28 Oct 24 12:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-363277                                                                | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:28 UTC |                     |
	| node    | multinode-363277 node delete                                                            | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:28 UTC | 28 Oct 24 12:28 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-363277 stop                                                                   | multinode-363277 | jenkins | v1.34.0 | 28 Oct 24 12:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:25:15
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
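Decoded against that format, the first entry below reads as: severity I (info), date 1028 (Oct 28), wall-clock time 12:25:15.370368, thread id 113146, source location out.go:345, and then the message ("Setting OutFile to fd 1 ..."). The same decoding applies to every I/W/E line in this log.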
	I1028 12:25:15.370368  113146 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:25:15.370607  113146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:25:15.370615  113146 out.go:358] Setting ErrFile to fd 2...
	I1028 12:25:15.370619  113146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:25:15.370769  113146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:25:15.371278  113146 out.go:352] Setting JSON to false
	I1028 12:25:15.372181  113146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7665,"bootTime":1730110650,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:25:15.372297  113146 start.go:139] virtualization: kvm guest
	I1028 12:25:15.374466  113146 out.go:177] * [multinode-363277] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:25:15.375848  113146 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:25:15.375922  113146 notify.go:220] Checking for updates...
	I1028 12:25:15.378475  113146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:25:15.379805  113146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:25:15.381150  113146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:25:15.382257  113146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:25:15.383375  113146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:25:15.385045  113146 config.go:182] Loaded profile config "multinode-363277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:25:15.385136  113146 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:25:15.385572  113146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:25:15.385639  113146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:25:15.400577  113146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38391
	I1028 12:25:15.401049  113146 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:25:15.401685  113146 main.go:141] libmachine: Using API Version  1
	I1028 12:25:15.401718  113146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:25:15.402055  113146 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:25:15.402243  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:25:15.436369  113146 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:25:15.437480  113146 start.go:297] selected driver: kvm2
	I1028 12:25:15.437492  113146 start.go:901] validating driver "kvm2" against &{Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:25:15.437634  113146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:25:15.437966  113146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:25:15.438044  113146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:25:15.452188  113146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:25:15.452821  113146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:25:15.452850  113146 cni.go:84] Creating CNI manager for ""
	I1028 12:25:15.452905  113146 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 12:25:15.452954  113146 start.go:340] cluster config:
	{Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:25:15.453085  113146 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:25:15.455367  113146 out.go:177] * Starting "multinode-363277" primary control-plane node in "multinode-363277" cluster
	I1028 12:25:15.456479  113146 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:25:15.456524  113146 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:25:15.456535  113146 cache.go:56] Caching tarball of preloaded images
	I1028 12:25:15.456613  113146 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:25:15.456624  113146 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:25:15.456731  113146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/config.json ...
	I1028 12:25:15.456913  113146 start.go:360] acquireMachinesLock for multinode-363277: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:25:15.456953  113146 start.go:364] duration metric: took 23.29µs to acquireMachinesLock for "multinode-363277"
	I1028 12:25:15.456967  113146 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:25:15.456975  113146 fix.go:54] fixHost starting: 
	I1028 12:25:15.457219  113146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:25:15.457249  113146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:25:15.470744  113146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1028 12:25:15.471181  113146 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:25:15.471675  113146 main.go:141] libmachine: Using API Version  1
	I1028 12:25:15.471693  113146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:25:15.472055  113146 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:25:15.472251  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:25:15.472376  113146 main.go:141] libmachine: (multinode-363277) Calling .GetState
	I1028 12:25:15.473815  113146 fix.go:112] recreateIfNeeded on multinode-363277: state=Running err=<nil>
	W1028 12:25:15.473836  113146 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:25:15.476507  113146 out.go:177] * Updating the running kvm2 "multinode-363277" VM ...
	I1028 12:25:15.477923  113146 machine.go:93] provisionDockerMachine start ...
	I1028 12:25:15.477941  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:25:15.478122  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.480515  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.480951  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.480987  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.481132  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.481290  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.481429  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.481556  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.481678  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:15.481932  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:15.481947  113146 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:25:15.584338  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-363277
	
	I1028 12:25:15.584372  113146 main.go:141] libmachine: (multinode-363277) Calling .GetMachineName
	I1028 12:25:15.584680  113146 buildroot.go:166] provisioning hostname "multinode-363277"
	I1028 12:25:15.584711  113146 main.go:141] libmachine: (multinode-363277) Calling .GetMachineName
	I1028 12:25:15.584898  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.587623  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.588014  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.588042  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.588158  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.588322  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.588484  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.588631  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.588777  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:15.588941  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:15.588953  113146 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-363277 && echo "multinode-363277" | sudo tee /etc/hostname
	I1028 12:25:15.701746  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-363277
	
	I1028 12:25:15.701770  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.704417  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.704828  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.704860  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.705017  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.705198  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.705370  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.705502  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.705630  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:15.705799  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:15.705821  113146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-363277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-363277/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-363277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:25:15.803978  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:25:15.804006  113146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:25:15.804043  113146 buildroot.go:174] setting up certificates
	I1028 12:25:15.804058  113146 provision.go:84] configureAuth start
	I1028 12:25:15.804073  113146 main.go:141] libmachine: (multinode-363277) Calling .GetMachineName
	I1028 12:25:15.804365  113146 main.go:141] libmachine: (multinode-363277) Calling .GetIP
	I1028 12:25:15.807139  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.807507  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.807551  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.807670  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.809821  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.810140  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.810163  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.810299  113146 provision.go:143] copyHostCerts
	I1028 12:25:15.810331  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:25:15.810382  113146 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:25:15.810397  113146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:25:15.810463  113146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:25:15.810560  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:25:15.810582  113146 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:25:15.810587  113146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:25:15.810613  113146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:25:15.810671  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:25:15.810686  113146 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:25:15.810692  113146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:25:15.810713  113146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:25:15.810776  113146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.multinode-363277 san=[127.0.0.1 192.168.39.174 localhost minikube multinode-363277]
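provision.go generates the machine's server certificate in Go; purely as a hedged illustration of what the org and SAN list printed above amount to, a rough openssl equivalent would be the following (file names are assumptions, not what minikube actually writes):

    # create a key and CSR for the organization shown in the log line above
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.multinode-363277" -out server.csr
    # sign it with the cluster CA, attaching the SANs from san=[...]
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.174,DNS:localhost,DNS:minikube,DNS:multinode-363277")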
	I1028 12:25:15.883401  113146 provision.go:177] copyRemoteCerts
	I1028 12:25:15.883464  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:25:15.883490  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:15.886015  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.886337  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:15.886364  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:15.886495  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:15.886692  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:15.886854  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:15.886987  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:25:15.965142  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 12:25:15.965210  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:25:15.988401  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 12:25:15.988464  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 12:25:16.009689  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 12:25:16.009749  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:25:16.030864  113146 provision.go:87] duration metric: took 226.793505ms to configureAuth
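The lines above regenerate the host-side TLS material and copy ca.pem, server.pem, and server-key.pem into /etc/docker on the guest over SSH. A quick spot-check from the host would look like this (a minimal sketch, not part of the test run; profile name and paths are taken from the log above):

  minikube -p multinode-363277 ssh -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem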
	I1028 12:25:16.030892  113146 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:25:16.031098  113146 config.go:182] Loaded profile config "multinode-363277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:25:16.031172  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:25:16.033704  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:16.034077  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:25:16.034111  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:25:16.034276  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:25:16.034430  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:16.034601  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:25:16.034771  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:25:16.034927  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:25:16.035150  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:25:16.035168  113146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:26:46.629890  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:26:46.629940  113146 machine.go:96] duration metric: took 1m31.152002375s to provisionDockerMachine
	I1028 12:26:46.629964  113146 start.go:293] postStartSetup for "multinode-363277" (driver="kvm2")
	I1028 12:26:46.629980  113146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:26:46.630008  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.630331  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:26:46.630383  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.633471  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.633910  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.633939  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.634205  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.634410  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.634635  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.634786  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:26:46.714803  113146 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:26:46.718535  113146 command_runner.go:130] > NAME=Buildroot
	I1028 12:26:46.718558  113146 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 12:26:46.718564  113146 command_runner.go:130] > ID=buildroot
	I1028 12:26:46.718569  113146 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 12:26:46.718576  113146 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 12:26:46.718637  113146 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:26:46.718650  113146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:26:46.718703  113146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:26:46.718784  113146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:26:46.718798  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /etc/ssl/certs/849652.pem
	I1028 12:26:46.718895  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:26:46.728124  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:26:46.750168  113146 start.go:296] duration metric: took 120.189513ms for postStartSetup
	I1028 12:26:46.750210  113146 fix.go:56] duration metric: took 1m31.29323411s for fixHost
	I1028 12:26:46.750241  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.753198  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.753692  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.753721  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.753866  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.754062  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.754239  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.754392  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.754580  113146 main.go:141] libmachine: Using SSH client type: native
	I1028 12:26:46.754786  113146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1028 12:26:46.754800  113146 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:26:46.852227  113146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730118406.822489579
	
	I1028 12:26:46.852250  113146 fix.go:216] guest clock: 1730118406.822489579
	I1028 12:26:46.852259  113146 fix.go:229] Guest: 2024-10-28 12:26:46.822489579 +0000 UTC Remote: 2024-10-28 12:26:46.750215468 +0000 UTC m=+91.418749930 (delta=72.274111ms)
	I1028 12:26:46.852286  113146 fix.go:200] guest clock delta is within tolerance: 72.274111ms
	I1028 12:26:46.852293  113146 start.go:83] releasing machines lock for "multinode-363277", held for 1m31.395330787s
	I1028 12:26:46.852333  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.852620  113146 main.go:141] libmachine: (multinode-363277) Calling .GetIP
	I1028 12:26:46.855384  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.855815  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.855848  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.855972  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.856438  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.856623  113146 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:26:46.856740  113146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:26:46.856801  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.856820  113146 ssh_runner.go:195] Run: cat /version.json
	I1028 12:26:46.856860  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:26:46.859523  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.859594  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.859916  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.859946  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.859973  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:46.860038  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:46.860054  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.860217  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:26:46.860231  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.860380  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:26:46.860454  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.860507  113146 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:26:46.860571  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:26:46.860616  113146 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:26:46.931508  113146 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 12:26:46.931716  113146 ssh_runner.go:195] Run: systemctl --version
	I1028 12:26:46.960787  113146 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1028 12:26:46.961386  113146 command_runner.go:130] > systemd 252 (252)
	I1028 12:26:46.961428  113146 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 12:26:46.961479  113146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:26:47.116759  113146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:26:47.122101  113146 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 12:26:47.122171  113146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:26:47.122254  113146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:26:47.130807  113146 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 12:26:47.130830  113146 start.go:495] detecting cgroup driver to use...
	I1028 12:26:47.130892  113146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:26:47.146990  113146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:26:47.160122  113146 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:26:47.160181  113146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:26:47.172032  113146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:26:47.183741  113146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:26:47.320690  113146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:26:47.464238  113146 docker.go:233] disabling docker service ...
	I1028 12:26:47.464310  113146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:26:47.479746  113146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:26:47.492971  113146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:26:47.630739  113146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:26:47.768326  113146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
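Reconstructed from the Run: lines above, the sequence that takes cri-dockerd and the Docker service out of the picture so CRI-O is the only active runtime would look like this when run directly on the guest (a sketch for reference, not a literal transcript):

  sudo systemctl stop -f cri-docker.socket cri-docker.service
  sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
  sudo systemctl stop -f docker.socket docker.service
  sudo systemctl disable docker.socket && sudo systemctl mask docker.service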
	I1028 12:26:47.781151  113146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:26:47.798101  113146 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
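With /etc/crictl.yaml pointing at the CRI-O socket, crictl on the guest talks to CRI-O without extra flags; the endpoint can also be passed explicitly (a sketch, assuming a shell on the node):

  sudo crictl info
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a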
	I1028 12:26:47.798141  113146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:26:47.798184  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.808732  113146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:26:47.808791  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.818387  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.827879  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.837125  113146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:26:47.846684  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.856098  113146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:26:47.865551  113146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
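After the sed edits above, the relevant portion of /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (reconstructed from the commands, not captured from the node):

  pause_image = "registry.k8s.io/pause:3.10"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]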
	I1028 12:26:47.874789  113146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:26:47.883132  113146 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 12:26:47.883201  113146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:26:47.891960  113146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:26:48.025332  113146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:26:54.029894  113146 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.004523549s)
	I1028 12:26:54.029923  113146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:26:54.029968  113146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:26:54.036371  113146 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1028 12:26:54.036394  113146 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 12:26:54.036401  113146 command_runner.go:130] > Device: 0,22	Inode: 1270        Links: 1
	I1028 12:26:54.036408  113146 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 12:26:54.036413  113146 command_runner.go:130] > Access: 2024-10-28 12:26:53.928012448 +0000
	I1028 12:26:54.036418  113146 command_runner.go:130] > Modify: 2024-10-28 12:26:53.895011771 +0000
	I1028 12:26:54.036423  113146 command_runner.go:130] > Change: 2024-10-28 12:26:53.895011771 +0000
	I1028 12:26:54.036430  113146 command_runner.go:130] >  Birth: -
	I1028 12:26:54.036592  113146 start.go:563] Will wait 60s for crictl version
	I1028 12:26:54.036651  113146 ssh_runner.go:195] Run: which crictl
	I1028 12:26:54.040127  113146 command_runner.go:130] > /usr/bin/crictl
	I1028 12:26:54.040260  113146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:26:54.074039  113146 command_runner.go:130] > Version:  0.1.0
	I1028 12:26:54.074061  113146 command_runner.go:130] > RuntimeName:  cri-o
	I1028 12:26:54.074067  113146 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1028 12:26:54.074072  113146 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 12:26:54.075283  113146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:26:54.075359  113146 ssh_runner.go:195] Run: crio --version
	I1028 12:26:54.103472  113146 command_runner.go:130] > crio version 1.29.1
	I1028 12:26:54.103493  113146 command_runner.go:130] > Version:        1.29.1
	I1028 12:26:54.103499  113146 command_runner.go:130] > GitCommit:      unknown
	I1028 12:26:54.103503  113146 command_runner.go:130] > GitCommitDate:  unknown
	I1028 12:26:54.103523  113146 command_runner.go:130] > GitTreeState:   clean
	I1028 12:26:54.103530  113146 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 12:26:54.103535  113146 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 12:26:54.103539  113146 command_runner.go:130] > Compiler:       gc
	I1028 12:26:54.103545  113146 command_runner.go:130] > Platform:       linux/amd64
	I1028 12:26:54.103553  113146 command_runner.go:130] > Linkmode:       dynamic
	I1028 12:26:54.103562  113146 command_runner.go:130] > BuildTags:      
	I1028 12:26:54.103569  113146 command_runner.go:130] >   containers_image_ostree_stub
	I1028 12:26:54.103573  113146 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 12:26:54.103577  113146 command_runner.go:130] >   btrfs_noversion
	I1028 12:26:54.103581  113146 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 12:26:54.103586  113146 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 12:26:54.103590  113146 command_runner.go:130] >   seccomp
	I1028 12:26:54.103597  113146 command_runner.go:130] > LDFlags:          unknown
	I1028 12:26:54.103601  113146 command_runner.go:130] > SeccompEnabled:   true
	I1028 12:26:54.103605  113146 command_runner.go:130] > AppArmorEnabled:  false
	I1028 12:26:54.104655  113146 ssh_runner.go:195] Run: crio --version
	I1028 12:26:54.130593  113146 command_runner.go:130] > crio version 1.29.1
	I1028 12:26:54.130617  113146 command_runner.go:130] > Version:        1.29.1
	I1028 12:26:54.130625  113146 command_runner.go:130] > GitCommit:      unknown
	I1028 12:26:54.130632  113146 command_runner.go:130] > GitCommitDate:  unknown
	I1028 12:26:54.130638  113146 command_runner.go:130] > GitTreeState:   clean
	I1028 12:26:54.130646  113146 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 12:26:54.130652  113146 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 12:26:54.130658  113146 command_runner.go:130] > Compiler:       gc
	I1028 12:26:54.130664  113146 command_runner.go:130] > Platform:       linux/amd64
	I1028 12:26:54.130671  113146 command_runner.go:130] > Linkmode:       dynamic
	I1028 12:26:54.130677  113146 command_runner.go:130] > BuildTags:      
	I1028 12:26:54.130685  113146 command_runner.go:130] >   containers_image_ostree_stub
	I1028 12:26:54.130693  113146 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 12:26:54.130703  113146 command_runner.go:130] >   btrfs_noversion
	I1028 12:26:54.130710  113146 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 12:26:54.130718  113146 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 12:26:54.130746  113146 command_runner.go:130] >   seccomp
	I1028 12:26:54.130756  113146 command_runner.go:130] > LDFlags:          unknown
	I1028 12:26:54.130763  113146 command_runner.go:130] > SeccompEnabled:   true
	I1028 12:26:54.130770  113146 command_runner.go:130] > AppArmorEnabled:  false
	I1028 12:26:54.133656  113146 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:26:54.135136  113146 main.go:141] libmachine: (multinode-363277) Calling .GetIP
	I1028 12:26:54.138093  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:54.138450  113146 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:26:54.138468  113146 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:26:54.138689  113146 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:26:54.142520  113146 command_runner.go:130] > 192.168.39.1	host.minikube.internal
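The grep above confirms that host.minikube.internal already resolves to the host-only gateway inside the guest; the entry being checked for would look like this if added by hand (illustrative, using the gateway address from this run):

  echo "192.168.39.1	host.minikube.internal" | sudo tee -a /etc/hosts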
	I1028 12:26:54.142608  113146 kubeadm.go:883] updating cluster {Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:26:54.142758  113146 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:26:54.142799  113146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:26:54.178788  113146 command_runner.go:130] > {
	I1028 12:26:54.178813  113146 command_runner.go:130] >   "images": [
	I1028 12:26:54.178818  113146 command_runner.go:130] >     {
	I1028 12:26:54.178828  113146 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 12:26:54.178833  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.178847  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 12:26:54.178853  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178857  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.178866  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 12:26:54.178873  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 12:26:54.178879  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178884  113146 command_runner.go:130] >       "size": "94965812",
	I1028 12:26:54.178888  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.178892  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.178900  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.178905  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.178908  113146 command_runner.go:130] >     },
	I1028 12:26:54.178913  113146 command_runner.go:130] >     {
	I1028 12:26:54.178919  113146 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 12:26:54.178924  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.178929  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 12:26:54.178934  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178938  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.178945  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 12:26:54.178952  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 12:26:54.178959  113146 command_runner.go:130] >       ],
	I1028 12:26:54.178963  113146 command_runner.go:130] >       "size": "1363676",
	I1028 12:26:54.178967  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.178981  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.178987  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.178991  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.178994  113146 command_runner.go:130] >     },
	I1028 12:26:54.178998  113146 command_runner.go:130] >     {
	I1028 12:26:54.179004  113146 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 12:26:54.179010  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179015  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 12:26:54.179019  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179025  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179033  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 12:26:54.179044  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 12:26:54.179051  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179055  113146 command_runner.go:130] >       "size": "31470524",
	I1028 12:26:54.179059  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.179065  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179069  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179076  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179079  113146 command_runner.go:130] >     },
	I1028 12:26:54.179083  113146 command_runner.go:130] >     {
	I1028 12:26:54.179089  113146 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 12:26:54.179096  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179103  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 12:26:54.179107  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179113  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179120  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 12:26:54.179135  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 12:26:54.179141  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179145  113146 command_runner.go:130] >       "size": "63273227",
	I1028 12:26:54.179151  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.179156  113146 command_runner.go:130] >       "username": "nonroot",
	I1028 12:26:54.179161  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179166  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179181  113146 command_runner.go:130] >     },
	I1028 12:26:54.179187  113146 command_runner.go:130] >     {
	I1028 12:26:54.179193  113146 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 12:26:54.179200  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179204  113146 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 12:26:54.179210  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179214  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179221  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 12:26:54.179230  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 12:26:54.179234  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179238  113146 command_runner.go:130] >       "size": "149009664",
	I1028 12:26:54.179242  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179246  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179249  113146 command_runner.go:130] >       },
	I1028 12:26:54.179253  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179257  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179261  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179264  113146 command_runner.go:130] >     },
	I1028 12:26:54.179268  113146 command_runner.go:130] >     {
	I1028 12:26:54.179274  113146 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 12:26:54.179278  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179283  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 12:26:54.179287  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179291  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179301  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 12:26:54.179308  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 12:26:54.179314  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179318  113146 command_runner.go:130] >       "size": "95274464",
	I1028 12:26:54.179321  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179325  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179329  113146 command_runner.go:130] >       },
	I1028 12:26:54.179333  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179337  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179346  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179352  113146 command_runner.go:130] >     },
	I1028 12:26:54.179355  113146 command_runner.go:130] >     {
	I1028 12:26:54.179361  113146 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 12:26:54.179368  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179373  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 12:26:54.179378  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179382  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179389  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 12:26:54.179399  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 12:26:54.179404  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179408  113146 command_runner.go:130] >       "size": "89474374",
	I1028 12:26:54.179412  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179416  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179421  113146 command_runner.go:130] >       },
	I1028 12:26:54.179425  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179429  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179434  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179438  113146 command_runner.go:130] >     },
	I1028 12:26:54.179441  113146 command_runner.go:130] >     {
	I1028 12:26:54.179447  113146 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 12:26:54.179452  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179457  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 12:26:54.179462  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179466  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179486  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 12:26:54.179496  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 12:26:54.179500  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179504  113146 command_runner.go:130] >       "size": "92783513",
	I1028 12:26:54.179508  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.179512  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179515  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179518  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179531  113146 command_runner.go:130] >     },
	I1028 12:26:54.179535  113146 command_runner.go:130] >     {
	I1028 12:26:54.179540  113146 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 12:26:54.179544  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179548  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 12:26:54.179551  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179556  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179567  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 12:26:54.179574  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 12:26:54.179577  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179581  113146 command_runner.go:130] >       "size": "68457798",
	I1028 12:26:54.179585  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179589  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.179592  113146 command_runner.go:130] >       },
	I1028 12:26:54.179596  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179599  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179603  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.179606  113146 command_runner.go:130] >     },
	I1028 12:26:54.179610  113146 command_runner.go:130] >     {
	I1028 12:26:54.179616  113146 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 12:26:54.179622  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.179636  113146 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 12:26:54.179641  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179645  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.179651  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 12:26:54.179658  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 12:26:54.179661  113146 command_runner.go:130] >       ],
	I1028 12:26:54.179665  113146 command_runner.go:130] >       "size": "742080",
	I1028 12:26:54.179669  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.179673  113146 command_runner.go:130] >         "value": "65535"
	I1028 12:26:54.179677  113146 command_runner.go:130] >       },
	I1028 12:26:54.179681  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.179685  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.179694  113146 command_runner.go:130] >       "pinned": true
	I1028 12:26:54.179700  113146 command_runner.go:130] >     }
	I1028 12:26:54.179704  113146 command_runner.go:130] >   ]
	I1028 12:26:54.179708  113146 command_runner.go:130] > }
	I1028 12:26:54.180230  113146 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:26:54.180251  113146 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:26:54.180301  113146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:26:54.210635  113146 command_runner.go:130] > {
	I1028 12:26:54.210669  113146 command_runner.go:130] >   "images": [
	I1028 12:26:54.210675  113146 command_runner.go:130] >     {
	I1028 12:26:54.210684  113146 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 12:26:54.210689  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210695  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 12:26:54.210699  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210703  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210711  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 12:26:54.210718  113146 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 12:26:54.210722  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210726  113146 command_runner.go:130] >       "size": "94965812",
	I1028 12:26:54.210730  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.210737  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.210743  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.210749  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.210755  113146 command_runner.go:130] >     },
	I1028 12:26:54.210760  113146 command_runner.go:130] >     {
	I1028 12:26:54.210770  113146 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 12:26:54.210780  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210787  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 12:26:54.210793  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210799  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210835  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 12:26:54.210847  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 12:26:54.210851  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210855  113146 command_runner.go:130] >       "size": "1363676",
	I1028 12:26:54.210859  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.210866  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.210870  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.210874  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.210883  113146 command_runner.go:130] >     },
	I1028 12:26:54.210887  113146 command_runner.go:130] >     {
	I1028 12:26:54.210893  113146 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 12:26:54.210897  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210902  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 12:26:54.210905  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210909  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210918  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 12:26:54.210928  113146 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 12:26:54.210931  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210935  113146 command_runner.go:130] >       "size": "31470524",
	I1028 12:26:54.210939  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.210943  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.210949  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.210953  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.210956  113146 command_runner.go:130] >     },
	I1028 12:26:54.210960  113146 command_runner.go:130] >     {
	I1028 12:26:54.210965  113146 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 12:26:54.210971  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.210976  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 12:26:54.210979  113146 command_runner.go:130] >       ],
	I1028 12:26:54.210983  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.210990  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 12:26:54.211004  113146 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 12:26:54.211010  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211021  113146 command_runner.go:130] >       "size": "63273227",
	I1028 12:26:54.211028  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.211032  113146 command_runner.go:130] >       "username": "nonroot",
	I1028 12:26:54.211036  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211040  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211043  113146 command_runner.go:130] >     },
	I1028 12:26:54.211047  113146 command_runner.go:130] >     {
	I1028 12:26:54.211055  113146 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 12:26:54.211059  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211081  113146 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 12:26:54.211094  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211103  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211113  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 12:26:54.211119  113146 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 12:26:54.211125  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211129  113146 command_runner.go:130] >       "size": "149009664",
	I1028 12:26:54.211133  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211137  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211141  113146 command_runner.go:130] >       },
	I1028 12:26:54.211145  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211149  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211153  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211157  113146 command_runner.go:130] >     },
	I1028 12:26:54.211160  113146 command_runner.go:130] >     {
	I1028 12:26:54.211166  113146 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 12:26:54.211172  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211177  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 12:26:54.211181  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211185  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211193  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 12:26:54.211200  113146 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 12:26:54.211206  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211210  113146 command_runner.go:130] >       "size": "95274464",
	I1028 12:26:54.211219  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211226  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211229  113146 command_runner.go:130] >       },
	I1028 12:26:54.211233  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211237  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211241  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211245  113146 command_runner.go:130] >     },
	I1028 12:26:54.211248  113146 command_runner.go:130] >     {
	I1028 12:26:54.211254  113146 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 12:26:54.211260  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211265  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 12:26:54.211271  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211275  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211282  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 12:26:54.211291  113146 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 12:26:54.211295  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211301  113146 command_runner.go:130] >       "size": "89474374",
	I1028 12:26:54.211305  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211310  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211313  113146 command_runner.go:130] >       },
	I1028 12:26:54.211319  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211323  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211328  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211332  113146 command_runner.go:130] >     },
	I1028 12:26:54.211335  113146 command_runner.go:130] >     {
	I1028 12:26:54.211341  113146 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 12:26:54.211347  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211352  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 12:26:54.211358  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211361  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211380  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 12:26:54.211390  113146 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 12:26:54.211394  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211402  113146 command_runner.go:130] >       "size": "92783513",
	I1028 12:26:54.211408  113146 command_runner.go:130] >       "uid": null,
	I1028 12:26:54.211412  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211416  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211420  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211423  113146 command_runner.go:130] >     },
	I1028 12:26:54.211427  113146 command_runner.go:130] >     {
	I1028 12:26:54.211432  113146 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 12:26:54.211438  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211443  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 12:26:54.211449  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211452  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211459  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 12:26:54.211467  113146 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 12:26:54.211471  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211475  113146 command_runner.go:130] >       "size": "68457798",
	I1028 12:26:54.211478  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211482  113146 command_runner.go:130] >         "value": "0"
	I1028 12:26:54.211486  113146 command_runner.go:130] >       },
	I1028 12:26:54.211490  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211494  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211498  113146 command_runner.go:130] >       "pinned": false
	I1028 12:26:54.211501  113146 command_runner.go:130] >     },
	I1028 12:26:54.211505  113146 command_runner.go:130] >     {
	I1028 12:26:54.211511  113146 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 12:26:54.211516  113146 command_runner.go:130] >       "repoTags": [
	I1028 12:26:54.211521  113146 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 12:26:54.211525  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211529  113146 command_runner.go:130] >       "repoDigests": [
	I1028 12:26:54.211536  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 12:26:54.211545  113146 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 12:26:54.211548  113146 command_runner.go:130] >       ],
	I1028 12:26:54.211552  113146 command_runner.go:130] >       "size": "742080",
	I1028 12:26:54.211561  113146 command_runner.go:130] >       "uid": {
	I1028 12:26:54.211567  113146 command_runner.go:130] >         "value": "65535"
	I1028 12:26:54.211571  113146 command_runner.go:130] >       },
	I1028 12:26:54.211575  113146 command_runner.go:130] >       "username": "",
	I1028 12:26:54.211578  113146 command_runner.go:130] >       "spec": null,
	I1028 12:26:54.211582  113146 command_runner.go:130] >       "pinned": true
	I1028 12:26:54.211586  113146 command_runner.go:130] >     }
	I1028 12:26:54.211589  113146 command_runner.go:130] >   ]
	I1028 12:26:54.211595  113146 command_runner.go:130] > }
	I1028 12:26:54.211735  113146 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:26:54.211747  113146 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:26:54.211755  113146 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.31.2 crio true true} ...
	I1028 12:26:54.211863  113146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-363277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:26:54.211935  113146 ssh_runner.go:195] Run: crio config
	I1028 12:26:54.257183  113146 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1028 12:26:54.257230  113146 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1028 12:26:54.257243  113146 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1028 12:26:54.257249  113146 command_runner.go:130] > #
	I1028 12:26:54.257257  113146 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1028 12:26:54.257264  113146 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1028 12:26:54.257273  113146 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1028 12:26:54.257288  113146 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1028 12:26:54.257301  113146 command_runner.go:130] > # reload'.
	I1028 12:26:54.257312  113146 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1028 12:26:54.257327  113146 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1028 12:26:54.257338  113146 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1028 12:26:54.257348  113146 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1028 12:26:54.257359  113146 command_runner.go:130] > [crio]
	I1028 12:26:54.257369  113146 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1028 12:26:54.257384  113146 command_runner.go:130] > # containers images, in this directory.
	I1028 12:26:54.257394  113146 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1028 12:26:54.257411  113146 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1028 12:26:54.257425  113146 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1028 12:26:54.257441  113146 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1028 12:26:54.257519  113146 command_runner.go:130] > # imagestore = ""
	I1028 12:26:54.257537  113146 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1028 12:26:54.257543  113146 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1028 12:26:54.257594  113146 command_runner.go:130] > storage_driver = "overlay"
	I1028 12:26:54.257611  113146 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1028 12:26:54.257625  113146 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1028 12:26:54.257635  113146 command_runner.go:130] > storage_option = [
	I1028 12:26:54.257728  113146 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1028 12:26:54.257751  113146 command_runner.go:130] > ]
	I1028 12:26:54.257768  113146 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1028 12:26:54.257782  113146 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1028 12:26:54.258037  113146 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1028 12:26:54.258050  113146 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1028 12:26:54.258056  113146 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1028 12:26:54.258060  113146 command_runner.go:130] > # always happen on a node reboot
	I1028 12:26:54.258246  113146 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1028 12:26:54.258275  113146 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1028 12:26:54.258285  113146 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1028 12:26:54.258290  113146 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1028 12:26:54.258365  113146 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1028 12:26:54.258384  113146 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1028 12:26:54.258396  113146 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1028 12:26:54.258614  113146 command_runner.go:130] > # internal_wipe = true
	I1028 12:26:54.258635  113146 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1028 12:26:54.258646  113146 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1028 12:26:54.258658  113146 command_runner.go:130] > # internal_repair = false
	I1028 12:26:54.258674  113146 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1028 12:26:54.258689  113146 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1028 12:26:54.258702  113146 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1028 12:26:54.258833  113146 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1028 12:26:54.258853  113146 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1028 12:26:54.258860  113146 command_runner.go:130] > [crio.api]
	I1028 12:26:54.258869  113146 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1028 12:26:54.259052  113146 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1028 12:26:54.259065  113146 command_runner.go:130] > # IP address on which the stream server will listen.
	I1028 12:26:54.259336  113146 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1028 12:26:54.259356  113146 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1028 12:26:54.259364  113146 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1028 12:26:54.259555  113146 command_runner.go:130] > # stream_port = "0"
	I1028 12:26:54.259567  113146 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1028 12:26:54.259785  113146 command_runner.go:130] > # stream_enable_tls = false
	I1028 12:26:54.259802  113146 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1028 12:26:54.260003  113146 command_runner.go:130] > # stream_idle_timeout = ""
	I1028 12:26:54.260042  113146 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1028 12:26:54.260056  113146 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1028 12:26:54.260067  113146 command_runner.go:130] > # minutes.
	I1028 12:26:54.260243  113146 command_runner.go:130] > # stream_tls_cert = ""
	I1028 12:26:54.260261  113146 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1028 12:26:54.260271  113146 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1028 12:26:54.260396  113146 command_runner.go:130] > # stream_tls_key = ""
	I1028 12:26:54.260414  113146 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1028 12:26:54.260427  113146 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1028 12:26:54.260464  113146 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1028 12:26:54.260538  113146 command_runner.go:130] > # stream_tls_ca = ""
	I1028 12:26:54.260557  113146 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 12:26:54.260625  113146 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1028 12:26:54.260639  113146 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 12:26:54.260738  113146 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1028 12:26:54.260753  113146 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1028 12:26:54.260763  113146 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1028 12:26:54.260773  113146 command_runner.go:130] > [crio.runtime]
	I1028 12:26:54.260784  113146 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1028 12:26:54.260795  113146 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1028 12:26:54.260803  113146 command_runner.go:130] > # "nofile=1024:2048"
	I1028 12:26:54.260816  113146 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1028 12:26:54.260908  113146 command_runner.go:130] > # default_ulimits = [
	I1028 12:26:54.261001  113146 command_runner.go:130] > # ]
	I1028 12:26:54.261023  113146 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1028 12:26:54.261214  113146 command_runner.go:130] > # no_pivot = false
	I1028 12:26:54.261224  113146 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1028 12:26:54.261230  113146 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1028 12:26:54.261435  113146 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1028 12:26:54.261448  113146 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1028 12:26:54.261456  113146 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1028 12:26:54.261468  113146 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 12:26:54.261556  113146 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1028 12:26:54.261565  113146 command_runner.go:130] > # Cgroup setting for conmon
	I1028 12:26:54.261577  113146 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1028 12:26:54.261698  113146 command_runner.go:130] > conmon_cgroup = "pod"
	I1028 12:26:54.261719  113146 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1028 12:26:54.261728  113146 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1028 12:26:54.261745  113146 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 12:26:54.261754  113146 command_runner.go:130] > conmon_env = [
	I1028 12:26:54.261764  113146 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 12:26:54.261931  113146 command_runner.go:130] > ]
	I1028 12:26:54.261943  113146 command_runner.go:130] > # Additional environment variables to set for all the
	I1028 12:26:54.261951  113146 command_runner.go:130] > # containers. These are overridden if set in the
	I1028 12:26:54.261961  113146 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1028 12:26:54.261992  113146 command_runner.go:130] > # default_env = [
	I1028 12:26:54.262100  113146 command_runner.go:130] > # ]
	I1028 12:26:54.262113  113146 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1028 12:26:54.262125  113146 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1028 12:26:54.262339  113146 command_runner.go:130] > # selinux = false
	I1028 12:26:54.262355  113146 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1028 12:26:54.262366  113146 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1028 12:26:54.262378  113146 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1028 12:26:54.262494  113146 command_runner.go:130] > # seccomp_profile = ""
	I1028 12:26:54.262505  113146 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1028 12:26:54.262511  113146 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1028 12:26:54.262517  113146 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1028 12:26:54.262521  113146 command_runner.go:130] > # which might increase security.
	I1028 12:26:54.262526  113146 command_runner.go:130] > # This option is currently deprecated,
	I1028 12:26:54.262533  113146 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1028 12:26:54.262634  113146 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1028 12:26:54.262645  113146 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1028 12:26:54.262651  113146 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1028 12:26:54.262659  113146 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1028 12:26:54.262665  113146 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1028 12:26:54.262672  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.262893  113146 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1028 12:26:54.262908  113146 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1028 12:26:54.262915  113146 command_runner.go:130] > # the cgroup blockio controller.
	I1028 12:26:54.262949  113146 command_runner.go:130] > # blockio_config_file = ""
	I1028 12:26:54.262964  113146 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1028 12:26:54.262973  113146 command_runner.go:130] > # blockio parameters.
	I1028 12:26:54.263183  113146 command_runner.go:130] > # blockio_reload = false
	I1028 12:26:54.263199  113146 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1028 12:26:54.263206  113146 command_runner.go:130] > # irqbalance daemon.
	I1028 12:26:54.263404  113146 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1028 12:26:54.263419  113146 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1028 12:26:54.263441  113146 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1028 12:26:54.263456  113146 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1028 12:26:54.263662  113146 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1028 12:26:54.263684  113146 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1028 12:26:54.263694  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.264360  113146 command_runner.go:130] > # rdt_config_file = ""
	I1028 12:26:54.264373  113146 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1028 12:26:54.264378  113146 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1028 12:26:54.264414  113146 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1028 12:26:54.264425  113146 command_runner.go:130] > # separate_pull_cgroup = ""
	I1028 12:26:54.264434  113146 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1028 12:26:54.264443  113146 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1028 12:26:54.264452  113146 command_runner.go:130] > # will be added.
	I1028 12:26:54.264458  113146 command_runner.go:130] > # default_capabilities = [
	I1028 12:26:54.264465  113146 command_runner.go:130] > # 	"CHOWN",
	I1028 12:26:54.264471  113146 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1028 12:26:54.264477  113146 command_runner.go:130] > # 	"FSETID",
	I1028 12:26:54.264487  113146 command_runner.go:130] > # 	"FOWNER",
	I1028 12:26:54.264493  113146 command_runner.go:130] > # 	"SETGID",
	I1028 12:26:54.264502  113146 command_runner.go:130] > # 	"SETUID",
	I1028 12:26:54.264510  113146 command_runner.go:130] > # 	"SETPCAP",
	I1028 12:26:54.264514  113146 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1028 12:26:54.264520  113146 command_runner.go:130] > # 	"KILL",
	I1028 12:26:54.264524  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264532  113146 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1028 12:26:54.264539  113146 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1028 12:26:54.264546  113146 command_runner.go:130] > # add_inheritable_capabilities = false
	I1028 12:26:54.264558  113146 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1028 12:26:54.264570  113146 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 12:26:54.264578  113146 command_runner.go:130] > default_sysctls = [
	I1028 12:26:54.264586  113146 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1028 12:26:54.264594  113146 command_runner.go:130] > ]
	I1028 12:26:54.264602  113146 command_runner.go:130] > # List of devices on the host that a
	I1028 12:26:54.264615  113146 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1028 12:26:54.264624  113146 command_runner.go:130] > # allowed_devices = [
	I1028 12:26:54.264631  113146 command_runner.go:130] > # 	"/dev/fuse",
	I1028 12:26:54.264651  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264663  113146 command_runner.go:130] > # List of additional devices, specified as
	I1028 12:26:54.264686  113146 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1028 12:26:54.264697  113146 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1028 12:26:54.264706  113146 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 12:26:54.264715  113146 command_runner.go:130] > # additional_devices = [
	I1028 12:26:54.264721  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264732  113146 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1028 12:26:54.264739  113146 command_runner.go:130] > # cdi_spec_dirs = [
	I1028 12:26:54.264748  113146 command_runner.go:130] > # 	"/etc/cdi",
	I1028 12:26:54.264755  113146 command_runner.go:130] > # 	"/var/run/cdi",
	I1028 12:26:54.264763  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264773  113146 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1028 12:26:54.264788  113146 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1028 12:26:54.264797  113146 command_runner.go:130] > # Defaults to false.
	I1028 12:26:54.264801  113146 command_runner.go:130] > # device_ownership_from_security_context = false
	I1028 12:26:54.264813  113146 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1028 12:26:54.264825  113146 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1028 12:26:54.264831  113146 command_runner.go:130] > # hooks_dir = [
	I1028 12:26:54.264842  113146 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1028 12:26:54.264850  113146 command_runner.go:130] > # ]
	I1028 12:26:54.264863  113146 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1028 12:26:54.264873  113146 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1028 12:26:54.264883  113146 command_runner.go:130] > # its default mounts from the following two files:
	I1028 12:26:54.264889  113146 command_runner.go:130] > #
	I1028 12:26:54.264900  113146 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1028 12:26:54.264912  113146 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1028 12:26:54.264923  113146 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1028 12:26:54.264932  113146 command_runner.go:130] > #
	I1028 12:26:54.264942  113146 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1028 12:26:54.264954  113146 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1028 12:26:54.264967  113146 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1028 12:26:54.264977  113146 command_runner.go:130] > #      only add mounts it finds in this file.
	I1028 12:26:54.264992  113146 command_runner.go:130] > #
	I1028 12:26:54.265003  113146 command_runner.go:130] > # default_mounts_file = ""
	I1028 12:26:54.265011  113146 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1028 12:26:54.265026  113146 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1028 12:26:54.265035  113146 command_runner.go:130] > pids_limit = 1024
	I1028 12:26:54.265045  113146 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1028 12:26:54.265057  113146 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1028 12:26:54.265069  113146 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1028 12:26:54.265084  113146 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1028 12:26:54.265093  113146 command_runner.go:130] > # log_size_max = -1
	I1028 12:26:54.265104  113146 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1028 12:26:54.265115  113146 command_runner.go:130] > # log_to_journald = false
	I1028 12:26:54.265124  113146 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1028 12:26:54.265135  113146 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1028 12:26:54.265146  113146 command_runner.go:130] > # Path to directory for container attach sockets.
	I1028 12:26:54.265157  113146 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1028 12:26:54.265167  113146 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1028 12:26:54.265176  113146 command_runner.go:130] > # bind_mount_prefix = ""
	I1028 12:26:54.265187  113146 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1028 12:26:54.265194  113146 command_runner.go:130] > # read_only = false
	I1028 12:26:54.265206  113146 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1028 12:26:54.265218  113146 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1028 12:26:54.265228  113146 command_runner.go:130] > # live configuration reload.
	I1028 12:26:54.265235  113146 command_runner.go:130] > # log_level = "info"
	I1028 12:26:54.265246  113146 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1028 12:26:54.265256  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.265265  113146 command_runner.go:130] > # log_filter = ""
	I1028 12:26:54.265274  113146 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1028 12:26:54.265286  113146 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1028 12:26:54.265295  113146 command_runner.go:130] > # separated by comma.
	I1028 12:26:54.265313  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265323  113146 command_runner.go:130] > # uid_mappings = ""
	I1028 12:26:54.265332  113146 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1028 12:26:54.265351  113146 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1028 12:26:54.265361  113146 command_runner.go:130] > # separated by comma.
	I1028 12:26:54.265373  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265382  113146 command_runner.go:130] > # gid_mappings = ""
	I1028 12:26:54.265392  113146 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1028 12:26:54.265401  113146 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 12:26:54.265410  113146 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 12:26:54.265419  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265425  113146 command_runner.go:130] > # minimum_mappable_uid = -1
	I1028 12:26:54.265431  113146 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1028 12:26:54.265439  113146 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 12:26:54.265446  113146 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 12:26:54.265453  113146 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 12:26:54.265459  113146 command_runner.go:130] > # minimum_mappable_gid = -1
	I1028 12:26:54.265465  113146 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1028 12:26:54.265472  113146 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1028 12:26:54.265478  113146 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1028 12:26:54.265487  113146 command_runner.go:130] > # ctr_stop_timeout = 30
	I1028 12:26:54.265496  113146 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1028 12:26:54.265508  113146 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1028 12:26:54.265518  113146 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1028 12:26:54.265526  113146 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1028 12:26:54.265543  113146 command_runner.go:130] > drop_infra_ctr = false
	I1028 12:26:54.265555  113146 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1028 12:26:54.265567  113146 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1028 12:26:54.265581  113146 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1028 12:26:54.265590  113146 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1028 12:26:54.265601  113146 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1028 12:26:54.265614  113146 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1028 12:26:54.265626  113146 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1028 12:26:54.265641  113146 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1028 12:26:54.265650  113146 command_runner.go:130] > # shared_cpuset = ""
	I1028 12:26:54.265660  113146 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1028 12:26:54.265677  113146 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1028 12:26:54.265685  113146 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1028 12:26:54.265695  113146 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1028 12:26:54.265704  113146 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1028 12:26:54.265716  113146 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1028 12:26:54.265731  113146 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1028 12:26:54.265741  113146 command_runner.go:130] > # enable_criu_support = false
	I1028 12:26:54.265752  113146 command_runner.go:130] > # Enable/disable the generation of the container,
	I1028 12:26:54.265765  113146 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1028 12:26:54.265775  113146 command_runner.go:130] > # enable_pod_events = false
	I1028 12:26:54.265784  113146 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 12:26:54.265795  113146 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 12:26:54.265807  113146 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1028 12:26:54.265816  113146 command_runner.go:130] > # default_runtime = "runc"
	I1028 12:26:54.265828  113146 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1028 12:26:54.265842  113146 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1028 12:26:54.265858  113146 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1028 12:26:54.265867  113146 command_runner.go:130] > # creation as a file is not desired either.
	I1028 12:26:54.265875  113146 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1028 12:26:54.265885  113146 command_runner.go:130] > # the hostname is being managed dynamically.
	I1028 12:26:54.265895  113146 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1028 12:26:54.265903  113146 command_runner.go:130] > # ]
	I1028 12:26:54.265914  113146 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1028 12:26:54.265927  113146 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1028 12:26:54.265939  113146 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1028 12:26:54.265949  113146 command_runner.go:130] > # Each entry in the table should follow the format:
	I1028 12:26:54.265957  113146 command_runner.go:130] > #
	I1028 12:26:54.265967  113146 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1028 12:26:54.265975  113146 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1028 12:26:54.266029  113146 command_runner.go:130] > # runtime_type = "oci"
	I1028 12:26:54.266042  113146 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1028 12:26:54.266050  113146 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1028 12:26:54.266061  113146 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1028 12:26:54.266078  113146 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1028 12:26:54.266088  113146 command_runner.go:130] > # monitor_env = []
	I1028 12:26:54.266098  113146 command_runner.go:130] > # privileged_without_host_devices = false
	I1028 12:26:54.266107  113146 command_runner.go:130] > # allowed_annotations = []
	I1028 12:26:54.266117  113146 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1028 12:26:54.266123  113146 command_runner.go:130] > # Where:
	I1028 12:26:54.266132  113146 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1028 12:26:54.266144  113146 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1028 12:26:54.266157  113146 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1028 12:26:54.266169  113146 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1028 12:26:54.266177  113146 command_runner.go:130] > #   in $PATH.
	I1028 12:26:54.266190  113146 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1028 12:26:54.266200  113146 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1028 12:26:54.266209  113146 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1028 12:26:54.266216  113146 command_runner.go:130] > #   state.
	I1028 12:26:54.266229  113146 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1028 12:26:54.266241  113146 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1028 12:26:54.266253  113146 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1028 12:26:54.266264  113146 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1028 12:26:54.266275  113146 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1028 12:26:54.266288  113146 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1028 12:26:54.266295  113146 command_runner.go:130] > #   The currently recognized values are:
	I1028 12:26:54.266305  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1028 12:26:54.266318  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1028 12:26:54.266330  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1028 12:26:54.266341  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1028 12:26:54.266356  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1028 12:26:54.266368  113146 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1028 12:26:54.266380  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1028 12:26:54.266388  113146 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1028 12:26:54.266405  113146 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1028 12:26:54.266417  113146 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1028 12:26:54.266427  113146 command_runner.go:130] > #   deprecated option "conmon".
	I1028 12:26:54.266447  113146 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1028 12:26:54.266458  113146 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1028 12:26:54.266470  113146 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1028 12:26:54.266480  113146 command_runner.go:130] > #   should be moved to the container's cgroup
	I1028 12:26:54.266490  113146 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1028 12:26:54.266500  113146 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1028 12:26:54.266514  113146 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1028 12:26:54.266525  113146 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1028 12:26:54.266533  113146 command_runner.go:130] > #
	I1028 12:26:54.266544  113146 command_runner.go:130] > # Using the seccomp notifier feature:
	I1028 12:26:54.266552  113146 command_runner.go:130] > #
	I1028 12:26:54.266561  113146 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1028 12:26:54.266573  113146 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1028 12:26:54.266580  113146 command_runner.go:130] > #
	I1028 12:26:54.266586  113146 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1028 12:26:54.266598  113146 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1028 12:26:54.266608  113146 command_runner.go:130] > #
	I1028 12:26:54.266618  113146 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1028 12:26:54.266627  113146 command_runner.go:130] > # feature.
	I1028 12:26:54.266635  113146 command_runner.go:130] > #
	I1028 12:26:54.266648  113146 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1028 12:26:54.266661  113146 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1028 12:26:54.266673  113146 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1028 12:26:54.266682  113146 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1028 12:26:54.266694  113146 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1028 12:26:54.266703  113146 command_runner.go:130] > #
	I1028 12:26:54.266712  113146 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1028 12:26:54.266725  113146 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1028 12:26:54.266733  113146 command_runner.go:130] > #
	I1028 12:26:54.266743  113146 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1028 12:26:54.266755  113146 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1028 12:26:54.266763  113146 command_runner.go:130] > #
	I1028 12:26:54.266772  113146 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1028 12:26:54.266789  113146 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1028 12:26:54.266797  113146 command_runner.go:130] > # limitation.
	I1028 12:26:54.266805  113146 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1028 12:26:54.266815  113146 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1028 12:26:54.266825  113146 command_runner.go:130] > runtime_type = "oci"
	I1028 12:26:54.266832  113146 command_runner.go:130] > runtime_root = "/run/runc"
	I1028 12:26:54.266842  113146 command_runner.go:130] > runtime_config_path = ""
	I1028 12:26:54.266849  113146 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1028 12:26:54.266858  113146 command_runner.go:130] > monitor_cgroup = "pod"
	I1028 12:26:54.266864  113146 command_runner.go:130] > monitor_exec_cgroup = ""
	I1028 12:26:54.266871  113146 command_runner.go:130] > monitor_env = [
	I1028 12:26:54.266881  113146 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 12:26:54.266888  113146 command_runner.go:130] > ]
	I1028 12:26:54.266896  113146 command_runner.go:130] > privileged_without_host_devices = false
	I1028 12:26:54.266909  113146 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1028 12:26:54.266920  113146 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1028 12:26:54.266931  113146 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1028 12:26:54.266945  113146 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1028 12:26:54.266957  113146 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1028 12:26:54.266968  113146 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1028 12:26:54.266985  113146 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1028 12:26:54.267001  113146 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1028 12:26:54.267012  113146 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1028 12:26:54.267026  113146 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1028 12:26:54.267034  113146 command_runner.go:130] > # Example:
	I1028 12:26:54.267041  113146 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1028 12:26:54.267048  113146 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1028 12:26:54.267058  113146 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1028 12:26:54.267069  113146 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1028 12:26:54.267078  113146 command_runner.go:130] > # cpuset = 0
	I1028 12:26:54.267087  113146 command_runner.go:130] > # cpushares = "0-1"
	I1028 12:26:54.267093  113146 command_runner.go:130] > # Where:
	I1028 12:26:54.267103  113146 command_runner.go:130] > # The workload name is workload-type.
	I1028 12:26:54.267122  113146 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1028 12:26:54.267132  113146 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1028 12:26:54.267144  113146 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1028 12:26:54.267159  113146 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1028 12:26:54.267171  113146 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1028 12:26:54.267182  113146 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1028 12:26:54.267194  113146 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1028 12:26:54.267204  113146 command_runner.go:130] > # Default value is set to true
	I1028 12:26:54.267211  113146 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1028 12:26:54.267217  113146 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1028 12:26:54.267228  113146 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1028 12:26:54.267238  113146 command_runner.go:130] > # Default value is set to 'false'
	I1028 12:26:54.267248  113146 command_runner.go:130] > # disable_hostport_mapping = false
	I1028 12:26:54.267261  113146 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1028 12:26:54.267268  113146 command_runner.go:130] > #
	I1028 12:26:54.267277  113146 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1028 12:26:54.267288  113146 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1028 12:26:54.267295  113146 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1028 12:26:54.267305  113146 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1028 12:26:54.267313  113146 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1028 12:26:54.267319  113146 command_runner.go:130] > [crio.image]
	I1028 12:26:54.267328  113146 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1028 12:26:54.267335  113146 command_runner.go:130] > # default_transport = "docker://"
	I1028 12:26:54.267351  113146 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1028 12:26:54.267361  113146 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1028 12:26:54.267368  113146 command_runner.go:130] > # global_auth_file = ""
	I1028 12:26:54.267378  113146 command_runner.go:130] > # The image used to instantiate infra containers.
	I1028 12:26:54.267386  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.267393  113146 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1028 12:26:54.267406  113146 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1028 12:26:54.267419  113146 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1028 12:26:54.267430  113146 command_runner.go:130] > # This option supports live configuration reload.
	I1028 12:26:54.267443  113146 command_runner.go:130] > # pause_image_auth_file = ""
	I1028 12:26:54.267461  113146 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1028 12:26:54.267470  113146 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1028 12:26:54.267481  113146 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1028 12:26:54.267494  113146 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1028 12:26:54.267503  113146 command_runner.go:130] > # pause_command = "/pause"
	I1028 12:26:54.267513  113146 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1028 12:26:54.267525  113146 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1028 12:26:54.267536  113146 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1028 12:26:54.267548  113146 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1028 12:26:54.267556  113146 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1028 12:26:54.267567  113146 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1028 12:26:54.267578  113146 command_runner.go:130] > # pinned_images = [
	I1028 12:26:54.267583  113146 command_runner.go:130] > # ]
	I1028 12:26:54.267595  113146 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1028 12:26:54.267608  113146 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1028 12:26:54.267621  113146 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1028 12:26:54.267651  113146 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1028 12:26:54.267666  113146 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1028 12:26:54.267676  113146 command_runner.go:130] > # signature_policy = ""
	I1028 12:26:54.267686  113146 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1028 12:26:54.267699  113146 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1028 12:26:54.267709  113146 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1028 12:26:54.267718  113146 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1028 12:26:54.267729  113146 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1028 12:26:54.267740  113146 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1028 12:26:54.267752  113146 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1028 12:26:54.267765  113146 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1028 12:26:54.267774  113146 command_runner.go:130] > # changing them here.
	I1028 12:26:54.267784  113146 command_runner.go:130] > # insecure_registries = [
	I1028 12:26:54.267792  113146 command_runner.go:130] > # ]
	I1028 12:26:54.267798  113146 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1028 12:26:54.267807  113146 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1028 12:26:54.267817  113146 command_runner.go:130] > # image_volumes = "mkdir"
	I1028 12:26:54.267837  113146 command_runner.go:130] > # Temporary directory to use for storing big files
	I1028 12:26:54.267847  113146 command_runner.go:130] > # big_files_temporary_dir = ""
	I1028 12:26:54.267859  113146 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1028 12:26:54.267867  113146 command_runner.go:130] > # CNI plugins.
	I1028 12:26:54.267873  113146 command_runner.go:130] > [crio.network]
	I1028 12:26:54.267882  113146 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1028 12:26:54.267892  113146 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1028 12:26:54.267901  113146 command_runner.go:130] > # cni_default_network = ""
	I1028 12:26:54.267915  113146 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1028 12:26:54.267925  113146 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1028 12:26:54.267936  113146 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1028 12:26:54.267945  113146 command_runner.go:130] > # plugin_dirs = [
	I1028 12:26:54.267954  113146 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1028 12:26:54.267959  113146 command_runner.go:130] > # ]
	I1028 12:26:54.267969  113146 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1028 12:26:54.267975  113146 command_runner.go:130] > [crio.metrics]
	I1028 12:26:54.267982  113146 command_runner.go:130] > # Globally enable or disable metrics support.
	I1028 12:26:54.267992  113146 command_runner.go:130] > enable_metrics = true
	I1028 12:26:54.268003  113146 command_runner.go:130] > # Specify enabled metrics collectors.
	I1028 12:26:54.268013  113146 command_runner.go:130] > # Per default all metrics are enabled.
	I1028 12:26:54.268026  113146 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1028 12:26:54.268037  113146 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1028 12:26:54.268049  113146 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1028 12:26:54.268056  113146 command_runner.go:130] > # metrics_collectors = [
	I1028 12:26:54.268059  113146 command_runner.go:130] > # 	"operations",
	I1028 12:26:54.268069  113146 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1028 12:26:54.268079  113146 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1028 12:26:54.268089  113146 command_runner.go:130] > # 	"operations_errors",
	I1028 12:26:54.268098  113146 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1028 12:26:54.268108  113146 command_runner.go:130] > # 	"image_pulls_by_name",
	I1028 12:26:54.268117  113146 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1028 12:26:54.268125  113146 command_runner.go:130] > # 	"image_pulls_failures",
	I1028 12:26:54.268134  113146 command_runner.go:130] > # 	"image_pulls_successes",
	I1028 12:26:54.268147  113146 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1028 12:26:54.268158  113146 command_runner.go:130] > # 	"image_layer_reuse",
	I1028 12:26:54.268169  113146 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1028 12:26:54.268180  113146 command_runner.go:130] > # 	"containers_oom_total",
	I1028 12:26:54.268188  113146 command_runner.go:130] > # 	"containers_oom",
	I1028 12:26:54.268197  113146 command_runner.go:130] > # 	"processes_defunct",
	I1028 12:26:54.268207  113146 command_runner.go:130] > # 	"operations_total",
	I1028 12:26:54.268216  113146 command_runner.go:130] > # 	"operations_latency_seconds",
	I1028 12:26:54.268224  113146 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1028 12:26:54.268230  113146 command_runner.go:130] > # 	"operations_errors_total",
	I1028 12:26:54.268243  113146 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1028 12:26:54.268254  113146 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1028 12:26:54.268264  113146 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1028 12:26:54.268273  113146 command_runner.go:130] > # 	"image_pulls_success_total",
	I1028 12:26:54.268282  113146 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1028 12:26:54.268292  113146 command_runner.go:130] > # 	"containers_oom_count_total",
	I1028 12:26:54.268302  113146 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1028 12:26:54.268310  113146 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1028 12:26:54.268314  113146 command_runner.go:130] > # ]
	I1028 12:26:54.268324  113146 command_runner.go:130] > # The port on which the metrics server will listen.
	I1028 12:26:54.268333  113146 command_runner.go:130] > # metrics_port = 9090
	I1028 12:26:54.268345  113146 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1028 12:26:54.268354  113146 command_runner.go:130] > # metrics_socket = ""
	I1028 12:26:54.268362  113146 command_runner.go:130] > # The certificate for the secure metrics server.
	I1028 12:26:54.268374  113146 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1028 12:26:54.268386  113146 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1028 12:26:54.268394  113146 command_runner.go:130] > # certificate on any modification event.
	I1028 12:26:54.268401  113146 command_runner.go:130] > # metrics_cert = ""
	I1028 12:26:54.268410  113146 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1028 12:26:54.268421  113146 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1028 12:26:54.268430  113146 command_runner.go:130] > # metrics_key = ""
	I1028 12:26:54.268442  113146 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1028 12:26:54.268452  113146 command_runner.go:130] > [crio.tracing]
	I1028 12:26:54.268469  113146 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1028 12:26:54.268477  113146 command_runner.go:130] > # enable_tracing = false
	I1028 12:26:54.268485  113146 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1028 12:26:54.268491  113146 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1028 12:26:54.268504  113146 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1028 12:26:54.268515  113146 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1028 12:26:54.268524  113146 command_runner.go:130] > # CRI-O NRI configuration.
	I1028 12:26:54.268530  113146 command_runner.go:130] > [crio.nri]
	I1028 12:26:54.268540  113146 command_runner.go:130] > # Globally enable or disable NRI.
	I1028 12:26:54.268549  113146 command_runner.go:130] > # enable_nri = false
	I1028 12:26:54.268559  113146 command_runner.go:130] > # NRI socket to listen on.
	I1028 12:26:54.268567  113146 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1028 12:26:54.268573  113146 command_runner.go:130] > # NRI plugin directory to use.
	I1028 12:26:54.268580  113146 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1028 12:26:54.268591  113146 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1028 12:26:54.268603  113146 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1028 12:26:54.268614  113146 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1028 12:26:54.268624  113146 command_runner.go:130] > # nri_disable_connections = false
	I1028 12:26:54.268632  113146 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1028 12:26:54.268646  113146 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1028 12:26:54.268654  113146 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1028 12:26:54.268659  113146 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1028 12:26:54.268665  113146 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1028 12:26:54.268670  113146 command_runner.go:130] > [crio.stats]
	I1028 12:26:54.268676  113146 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1028 12:26:54.268686  113146 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1028 12:26:54.268697  113146 command_runner.go:130] > # stats_collection_period = 0
	I1028 12:26:54.268744  113146 command_runner.go:130] ! time="2024-10-28 12:26:54.218181467Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1028 12:26:54.268772  113146 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
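
	The dump above is CRI-O's effective configuration echoed back during provisioning; almost every option is still at its commented-out default, and the one visible override is enable_metrics = true under [crio.metrics]. As a minimal illustration (assuming the github.com/BurntSushi/toml package; the crioConfig struct below is hypothetical and models only these two fields), such a TOML fragment can be decoded and checked in Go:

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// crioConfig models only the two fields this sketch looks at; the real
	// CRI-O configuration has many more sections and options.
	type crioConfig struct {
		Crio struct {
			Metrics struct {
				EnableMetrics bool `toml:"enable_metrics"`
				MetricsPort   int  `toml:"metrics_port"`
			} `toml:"metrics"`
		} `toml:"crio"`
	}

	func main() {
		// Fragment shaped like the dump above; the port value is illustrative.
		fragment := `
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	`
		var cfg crioConfig
		if _, err := toml.Decode(fragment, &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("metrics enabled: %v, port: %d\n",
			cfg.Crio.Metrics.EnableMetrics, cfg.Crio.Metrics.MetricsPort)
	}
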
	I1028 12:26:54.268878  113146 cni.go:84] Creating CNI manager for ""
	I1028 12:26:54.268894  113146 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 12:26:54.268907  113146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:26:54.268931  113146 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-363277 NodeName:multinode-363277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:26:54.269074  113146 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-363277"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
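
	The generated config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch, assuming gopkg.in/yaml.v3 and a hypothetical local copy named kubeadm.yaml, that walks such a stream and prints each document's kind:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Hypothetical local copy of the generated config; on the node the
		// file is written to /var/tmp/minikube/kubeadm.yaml.new.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				log.Fatal(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
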
	
	I1028 12:26:54.269145  113146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:26:54.278934  113146 command_runner.go:130] > kubeadm
	I1028 12:26:54.278947  113146 command_runner.go:130] > kubectl
	I1028 12:26:54.278951  113146 command_runner.go:130] > kubelet
	I1028 12:26:54.279100  113146 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:26:54.279154  113146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:26:54.287998  113146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 12:26:54.305540  113146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:26:54.321789  113146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1028 12:26:54.339762  113146 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1028 12:26:54.343602  113146 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
	I1028 12:26:54.343840  113146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:26:54.488413  113146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:26:54.502072  113146 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277 for IP: 192.168.39.174
	I1028 12:26:54.502100  113146 certs.go:194] generating shared ca certs ...
	I1028 12:26:54.502137  113146 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:26:54.502336  113146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:26:54.502401  113146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:26:54.502409  113146 certs.go:256] generating profile certs ...
	I1028 12:26:54.502491  113146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/client.key
	I1028 12:26:54.502547  113146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.key.b804b213
	I1028 12:26:54.502584  113146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.key
	I1028 12:26:54.502597  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 12:26:54.502610  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 12:26:54.502628  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 12:26:54.502638  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 12:26:54.502648  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 12:26:54.502659  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 12:26:54.502678  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 12:26:54.502693  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 12:26:54.502739  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:26:54.502764  113146 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:26:54.502776  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:26:54.502815  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:26:54.502857  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:26:54.502884  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:26:54.502931  113146 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:26:54.502957  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem -> /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.502970  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.502982  113146 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.503565  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:26:54.528654  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:26:54.549978  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:26:54.571092  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:26:54.593005  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:26:54.615090  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:26:54.635764  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:26:54.656553  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/multinode-363277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:26:54.677848  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:26:54.699019  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:26:54.721786  113146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:26:54.743434  113146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:26:54.758271  113146 ssh_runner.go:195] Run: openssl version
	I1028 12:26:54.763518  113146 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 12:26:54.763603  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:26:54.773016  113146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.777047  113146 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.777074  113146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.777107  113146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:26:54.782229  113146 command_runner.go:130] > 51391683
	I1028 12:26:54.782291  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:26:54.790771  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:26:54.801042  113146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.805011  113146 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.805041  113146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.805084  113146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:26:54.810151  113146 command_runner.go:130] > 3ec20f2e
	I1028 12:26:54.810223  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:26:54.818484  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:26:54.827906  113146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.831689  113146 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.831712  113146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.831740  113146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:26:54.836745  113146 command_runner.go:130] > b5213941
	I1028 12:26:54.836800  113146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
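
	The three blocks above publish each CA certificate to the system trust store: openssl x509 -hash -noout computes the subject hash (51391683, 3ec20f2e, b5213941) and the certificate is then symlinked as <hash>.0 under /etc/ssl/certs. A minimal Go sketch of the same two steps (the installCA helper is hypothetical; it shells out to openssl exactly as the log does):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA hashes certPath the way `openssl x509 -hash -noout -in <cert>`
	// does and symlinks it as /etc/ssl/certs/<hash>.0, which is how the log
	// above makes each CA discoverable by its subject hash.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // behave like `ln -fs`: replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}
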
	I1028 12:26:54.844944  113146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:26:54.848761  113146 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:26:54.848780  113146 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 12:26:54.848786  113146 command_runner.go:130] > Device: 253,1	Inode: 6291502     Links: 1
	I1028 12:26:54.848793  113146 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 12:26:54.848799  113146 command_runner.go:130] > Access: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848804  113146 command_runner.go:130] > Modify: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848809  113146 command_runner.go:130] > Change: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848813  113146 command_runner.go:130] >  Birth: 2024-10-28 12:20:14.923822681 +0000
	I1028 12:26:54.848857  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:26:54.854031  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.854097  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:26:54.859173  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.859364  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:26:54.864309  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.864357  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:26:54.869173  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.869225  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:26:54.873907  113146 command_runner.go:130] > Certificate will not expire
	I1028 12:26:54.874149  113146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:26:54.879331  113146 command_runner.go:130] > Certificate will not expire
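
	Each openssl x509 -noout -in <cert> -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now; "Certificate will not expire" means the existing cert can be reused without regeneration. A minimal Go equivalent using crypto/x509 (the checkExpiry helper is hypothetical; the path is one of the certs checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkExpiry reports whether the PEM-encoded certificate at path will
	// still be valid `window` from now, the equivalent of
	// `openssl x509 -checkend <seconds>`.
	func checkExpiry(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := checkExpiry("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ok {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}
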
	I1028 12:26:54.879389  113146 kubeadm.go:392] StartCluster: {Name:multinode-363277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-363277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.242 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:26:54.879492  113146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:26:54.879544  113146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:26:54.911765  113146 command_runner.go:130] > b03df03ee35acaf8c99f2c47a6678ed30444be7f72058e32adb573b7b6544dd3
	I1028 12:26:54.911816  113146 command_runner.go:130] > b4d0124a73f58166690efbc377a141df990cc4edac70890411b1a0e278a3c374
	I1028 12:26:54.911825  113146 command_runner.go:130] > 1d507030ea898557fee2b19376e88b69ae364b5aeeec9fb7555f6a6e040cf447
	I1028 12:26:54.911833  113146 command_runner.go:130] > ffa26b10a3810791a68c757fbe3481291d2e771ac8fcf67a662cc067572e7132
	I1028 12:26:54.911838  113146 command_runner.go:130] > 7cecd815f01756107482ffad4e85dc0db4c2b4ef09a12d0b056b5c368d487c59
	I1028 12:26:54.911844  113146 command_runner.go:130] > 6bb5157fc0fd9e1ef085e28966d2d297ccad22275908cd65958962a7cf675b4f
	I1028 12:26:54.911849  113146 command_runner.go:130] > 1d570edc04e5aa175f4a56b27634b7e47b995768bae965e2814c6fb9d95a9969
	I1028 12:26:54.911862  113146 command_runner.go:130] > dc179a1c5110656277e56e9c5310384a548e8c498c63ea4c8582e983c3a50328
	I1028 12:26:54.913358  113146 cri.go:89] found id: "b03df03ee35acaf8c99f2c47a6678ed30444be7f72058e32adb573b7b6544dd3"
	I1028 12:26:54.913373  113146 cri.go:89] found id: "b4d0124a73f58166690efbc377a141df990cc4edac70890411b1a0e278a3c374"
	I1028 12:26:54.913378  113146 cri.go:89] found id: "1d507030ea898557fee2b19376e88b69ae364b5aeeec9fb7555f6a6e040cf447"
	I1028 12:26:54.913381  113146 cri.go:89] found id: "ffa26b10a3810791a68c757fbe3481291d2e771ac8fcf67a662cc067572e7132"
	I1028 12:26:54.913384  113146 cri.go:89] found id: "7cecd815f01756107482ffad4e85dc0db4c2b4ef09a12d0b056b5c368d487c59"
	I1028 12:26:54.913387  113146 cri.go:89] found id: "6bb5157fc0fd9e1ef085e28966d2d297ccad22275908cd65958962a7cf675b4f"
	I1028 12:26:54.913390  113146 cri.go:89] found id: "1d570edc04e5aa175f4a56b27634b7e47b995768bae965e2814c6fb9d95a9969"
	I1028 12:26:54.913392  113146 cri.go:89] found id: "dc179a1c5110656277e56e9c5310384a548e8c498c63ea4c8582e983c3a50328"
	I1028 12:26:54.913394  113146 cri.go:89] found id: ""
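
	The IDs above come from running crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system on the node and splitting its stdout; each ID is then examined further (the runc list call below). A minimal local sketch of that listing step (the listKubeSystemContainers helper is hypothetical; the crictl flags are the ones shown in the log):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns the IDs of every CRI container
	// (running or not) whose pod lives in the kube-system namespace, using
	// the same crictl invocation shown in the log above.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			log.Fatal(err)
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
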
	I1028 12:26:54.913437  113146 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-363277 -n multinode-363277
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-363277 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.11s)

                                                
                                    
x
+
TestPreload (270.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-490398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-490398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.521122316s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-490398 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-490398 image pull gcr.io/k8s-minikube/busybox: (2.365964778s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-490398
E1028 12:37:13.450868   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-490398: exit status 82 (2m0.457377873s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-490398"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-490398 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-10-28 12:39:01.495386254 +0000 UTC m=+3723.060957685
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-490398 -n test-preload-490398
E1028 12:39:03.445322   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-490398 -n test-preload-490398: exit status 3 (18.508883073s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:39:19.999960  117975 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.191:22: connect: no route to host
	E1028 12:39:20.000007  117975 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.191:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-490398" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-490398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-490398
E1028 12:39:20.376248   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestPreload (270.80s)

                                                
                                    
x
+
TestKubernetesUpgrade (1173.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m54.318300865s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-868919] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-868919" primary control-plane node in "kubernetes-upgrade-868919" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:45:12.842303  125024 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:45:12.842561  125024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:45:12.842570  125024 out.go:358] Setting ErrFile to fd 2...
	I1028 12:45:12.842574  125024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:45:12.842782  125024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:45:12.843328  125024 out.go:352] Setting JSON to false
	I1028 12:45:12.844301  125024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8863,"bootTime":1730110650,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:45:12.844363  125024 start.go:139] virtualization: kvm guest
	I1028 12:45:12.846945  125024 out.go:177] * [kubernetes-upgrade-868919] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:45:12.848279  125024 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:45:12.848285  125024 notify.go:220] Checking for updates...
	I1028 12:45:12.850686  125024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:45:12.852067  125024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:45:12.853362  125024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:45:12.854497  125024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:45:12.855777  125024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:45:12.857393  125024 config.go:182] Loaded profile config "NoKubernetes-394868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1028 12:45:12.857532  125024 config.go:182] Loaded profile config "cert-expiration-717454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:45:12.857692  125024 config.go:182] Loaded profile config "cert-options-764199": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:45:12.857816  125024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:45:12.892868  125024 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:45:12.894158  125024 start.go:297] selected driver: kvm2
	I1028 12:45:12.894172  125024 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:45:12.894183  125024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:45:12.894866  125024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:45:12.894949  125024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:45:12.909271  125024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:45:12.909326  125024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 12:45:12.909608  125024 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 12:45:12.909643  125024 cni.go:84] Creating CNI manager for ""
	I1028 12:45:12.909699  125024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:45:12.909707  125024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 12:45:12.909764  125024 start.go:340] cluster config:
	{Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:45:12.909891  125024 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:45:12.911348  125024 out.go:177] * Starting "kubernetes-upgrade-868919" primary control-plane node in "kubernetes-upgrade-868919" cluster
	I1028 12:45:12.912434  125024 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:45:12.912466  125024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 12:45:12.912476  125024 cache.go:56] Caching tarball of preloaded images
	I1028 12:45:12.912560  125024 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:45:12.912571  125024 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 12:45:12.912670  125024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/config.json ...
	I1028 12:45:12.912691  125024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/config.json: {Name:mk93f84a29ec3959aba5d7425493cc62b6cde5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:45:12.912839  125024 start.go:360] acquireMachinesLock for kubernetes-upgrade-868919: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:45:37.888098  125024 start.go:364] duration metric: took 24.975218563s to acquireMachinesLock for "kubernetes-upgrade-868919"
	I1028 12:45:37.888184  125024 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:45:37.888340  125024 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 12:45:37.890520  125024 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 12:45:37.890700  125024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:45:37.890763  125024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:45:37.907360  125024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I1028 12:45:37.907733  125024 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:45:37.908365  125024 main.go:141] libmachine: Using API Version  1
	I1028 12:45:37.908390  125024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:45:37.908754  125024 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:45:37.908922  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetMachineName
	I1028 12:45:37.909096  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:45:37.909298  125024 start.go:159] libmachine.API.Create for "kubernetes-upgrade-868919" (driver="kvm2")
	I1028 12:45:37.909348  125024 client.go:168] LocalClient.Create starting
	I1028 12:45:37.909383  125024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 12:45:37.909428  125024 main.go:141] libmachine: Decoding PEM data...
	I1028 12:45:37.909453  125024 main.go:141] libmachine: Parsing certificate...
	I1028 12:45:37.909538  125024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 12:45:37.909566  125024 main.go:141] libmachine: Decoding PEM data...
	I1028 12:45:37.909584  125024 main.go:141] libmachine: Parsing certificate...
	I1028 12:45:37.909601  125024 main.go:141] libmachine: Running pre-create checks...
	I1028 12:45:37.909616  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .PreCreateCheck
	I1028 12:45:37.909935  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetConfigRaw
	I1028 12:45:37.910358  125024 main.go:141] libmachine: Creating machine...
	I1028 12:45:37.910376  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .Create
	I1028 12:45:37.910488  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Creating KVM machine...
	I1028 12:45:37.911621  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found existing default KVM network
	I1028 12:45:37.912850  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:37.912685  125411 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e7:f6:72} reservation:<nil>}
	I1028 12:45:37.913953  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:37.913855  125411 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:19:8e:47} reservation:<nil>}
	I1028 12:45:37.915094  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:37.915014  125411 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000260e70}
	I1028 12:45:37.915146  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | created network xml: 
	I1028 12:45:37.915164  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | <network>
	I1028 12:45:37.915179  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |   <name>mk-kubernetes-upgrade-868919</name>
	I1028 12:45:37.915192  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |   <dns enable='no'/>
	I1028 12:45:37.915201  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |   
	I1028 12:45:37.915211  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1028 12:45:37.915241  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |     <dhcp>
	I1028 12:45:37.915274  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1028 12:45:37.915289  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |     </dhcp>
	I1028 12:45:37.915297  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |   </ip>
	I1028 12:45:37.915305  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG |   
	I1028 12:45:37.915315  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | </network>
	I1028 12:45:37.915326  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | 
	I1028 12:45:37.920385  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | trying to create private KVM network mk-kubernetes-upgrade-868919 192.168.61.0/24...
	I1028 12:45:37.989901  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | private KVM network mk-kubernetes-upgrade-868919 192.168.61.0/24 created
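
	The driver has just defined and started a dedicated libvirt network, mk-kubernetes-upgrade-868919 on 192.168.61.0/24, from the XML printed above. Outside of minikube the same network can be created by feeding that XML to virsh; a minimal Go sketch (assumes virsh is available and the network XML was saved to net.xml):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts with its combined output on failure.
	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// net.xml is assumed to hold the <network> definition printed in the
		// log above (name mk-kubernetes-upgrade-868919, 192.168.61.0/24).
		run("virsh", "net-define", "net.xml")
		run("virsh", "net-start", "mk-kubernetes-upgrade-868919")
		run("virsh", "net-autostart", "mk-kubernetes-upgrade-868919")
	}
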
	I1028 12:45:37.989940  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919 ...
	I1028 12:45:37.989956  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:37.989877  125411 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:45:37.989974  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 12:45:37.990036  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 12:45:38.243172  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:38.243056  125411 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa...
	I1028 12:45:38.371114  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:38.370977  125411 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/kubernetes-upgrade-868919.rawdisk...
	I1028 12:45:38.371143  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Writing magic tar header
	I1028 12:45:38.371161  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Writing SSH key tar header
	I1028 12:45:38.371188  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:38.371133  125411 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919 ...
	I1028 12:45:38.371278  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919
	I1028 12:45:38.371306  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919 (perms=drwx------)
	I1028 12:45:38.371317  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 12:45:38.371331  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 12:45:38.371345  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 12:45:38.371435  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:45:38.371460  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 12:45:38.371486  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 12:45:38.371509  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 12:45:38.371522  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Checking permissions on dir: /home/jenkins
	I1028 12:45:38.371532  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Checking permissions on dir: /home
	I1028 12:45:38.371542  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 12:45:38.371548  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Skipping /home - not owner
	I1028 12:45:38.371581  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 12:45:38.371592  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Creating domain...
	I1028 12:45:38.372837  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) define libvirt domain using xml: 
	I1028 12:45:38.372868  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) <domain type='kvm'>
	I1028 12:45:38.372879  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   <name>kubernetes-upgrade-868919</name>
	I1028 12:45:38.372887  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   <memory unit='MiB'>2200</memory>
	I1028 12:45:38.372896  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   <vcpu>2</vcpu>
	I1028 12:45:38.372902  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   <features>
	I1028 12:45:38.372911  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <acpi/>
	I1028 12:45:38.372930  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <apic/>
	I1028 12:45:38.372941  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <pae/>
	I1028 12:45:38.372951  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     
	I1028 12:45:38.372961  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   </features>
	I1028 12:45:38.372971  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   <cpu mode='host-passthrough'>
	I1028 12:45:38.372980  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   
	I1028 12:45:38.372987  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   </cpu>
	I1028 12:45:38.372998  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   <os>
	I1028 12:45:38.373007  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <type>hvm</type>
	I1028 12:45:38.373018  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <boot dev='cdrom'/>
	I1028 12:45:38.373028  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <boot dev='hd'/>
	I1028 12:45:38.373039  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <bootmenu enable='no'/>
	I1028 12:45:38.373049  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   </os>
	I1028 12:45:38.373060  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   <devices>
	I1028 12:45:38.373071  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <disk type='file' device='cdrom'>
	I1028 12:45:38.373102  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/boot2docker.iso'/>
	I1028 12:45:38.373113  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <target dev='hdc' bus='scsi'/>
	I1028 12:45:38.373122  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <readonly/>
	I1028 12:45:38.373128  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     </disk>
	I1028 12:45:38.373137  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <disk type='file' device='disk'>
	I1028 12:45:38.373149  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 12:45:38.373169  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/kubernetes-upgrade-868919.rawdisk'/>
	I1028 12:45:38.373179  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <target dev='hda' bus='virtio'/>
	I1028 12:45:38.373190  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     </disk>
	I1028 12:45:38.373200  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <interface type='network'>
	I1028 12:45:38.373213  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <source network='mk-kubernetes-upgrade-868919'/>
	I1028 12:45:38.373223  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <model type='virtio'/>
	I1028 12:45:38.373232  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     </interface>
	I1028 12:45:38.373242  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <interface type='network'>
	I1028 12:45:38.373251  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <source network='default'/>
	I1028 12:45:38.373261  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <model type='virtio'/>
	I1028 12:45:38.373273  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     </interface>
	I1028 12:45:38.373283  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <serial type='pty'>
	I1028 12:45:38.373294  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <target port='0'/>
	I1028 12:45:38.373304  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     </serial>
	I1028 12:45:38.373313  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <console type='pty'>
	I1028 12:45:38.373323  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <target type='serial' port='0'/>
	I1028 12:45:38.373333  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     </console>
	I1028 12:45:38.373341  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     <rng model='virtio'>
	I1028 12:45:38.373354  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)       <backend model='random'>/dev/random</backend>
	I1028 12:45:38.373364  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     </rng>
	I1028 12:45:38.373374  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     
	I1028 12:45:38.373383  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)     
	I1028 12:45:38.373391  125024 main.go:141] libmachine: (kubernetes-upgrade-868919)   </devices>
	I1028 12:45:38.373401  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) </domain>
	I1028 12:45:38.373411  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) 
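The block above is the libvirt domain definition the KVM driver prints before creating the VM. For readers tracing this step, here is a minimal, self-contained Go sketch of rendering a similar domain XML from a template with only the standard library; the field names (Name, MemoryMiB, VCPU, ISOPath, DiskPath, Network) and the paths are illustrative assumptions, not minikube's actual config struct or template.

// Hedged sketch: rendering a minimal libvirt domain XML with text/template.
// Field names and paths are illustrative assumptions, not minikube's code.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"Name":      "kubernetes-upgrade-868919",
		"MemoryMiB": 2200,
		"VCPU":      2,
		"ISOPath":   "/path/to/boot2docker.iso",
		"DiskPath":  "/path/to/machine.rawdisk",
		"Network":   "mk-kubernetes-upgrade-868919",
	})
}

Feeding the rendered XML to libvirt (for example with "virsh define") corresponds to the "define libvirt domain using xml" and "Creating domain..." lines above.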
	I1028 12:45:38.379171  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:1f:ae:b3 in network default
	I1028 12:45:38.379910  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:38.379939  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Ensuring networks are active...
	I1028 12:45:38.380795  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Ensuring network default is active
	I1028 12:45:38.381279  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Ensuring network mk-kubernetes-upgrade-868919 is active
	I1028 12:45:38.381966  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Getting domain xml...
	I1028 12:45:38.382775  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Creating domain...
	I1028 12:45:39.710931  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Waiting to get IP...
	I1028 12:45:39.711887  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:39.714747  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:39.714775  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:39.714696  125411 retry.go:31] will retry after 250.769921ms: waiting for machine to come up
	I1028 12:45:39.967375  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:39.968003  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:39.968034  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:39.967967  125411 retry.go:31] will retry after 324.487073ms: waiting for machine to come up
	I1028 12:45:40.294754  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:40.295334  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:40.295366  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:40.295278  125411 retry.go:31] will retry after 471.861686ms: waiting for machine to come up
	I1028 12:45:40.769146  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:40.769590  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:40.769619  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:40.769542  125411 retry.go:31] will retry after 585.403288ms: waiting for machine to come up
	I1028 12:45:41.356540  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:41.357040  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:41.357071  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:41.356970  125411 retry.go:31] will retry after 486.935432ms: waiting for machine to come up
	I1028 12:45:41.846096  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:41.846632  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:41.846662  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:41.846551  125411 retry.go:31] will retry after 808.449346ms: waiting for machine to come up
	I1028 12:45:42.656947  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:42.657458  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:42.657520  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:42.657439  125411 retry.go:31] will retry after 807.030611ms: waiting for machine to come up
	I1028 12:45:43.465706  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:43.466203  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:43.466232  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:43.466156  125411 retry.go:31] will retry after 991.220522ms: waiting for machine to come up
	I1028 12:45:44.459422  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:44.459894  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:44.459930  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:44.459864  125411 retry.go:31] will retry after 1.761283651s: waiting for machine to come up
	I1028 12:45:46.223869  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:46.224267  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:46.224294  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:46.224211  125411 retry.go:31] will retry after 1.779636752s: waiting for machine to come up
	I1028 12:45:48.006147  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:48.006797  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:48.006830  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:48.006733  125411 retry.go:31] will retry after 1.849954881s: waiting for machine to come up
	I1028 12:45:49.858915  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:49.859340  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:49.859407  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:49.859329  125411 retry.go:31] will retry after 3.483973268s: waiting for machine to come up
	I1028 12:45:53.345262  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:53.345870  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:53.345907  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:53.345816  125411 retry.go:31] will retry after 2.954925301s: waiting for machine to come up
	I1028 12:45:56.842784  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:45:56.843277  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find current IP address of domain kubernetes-upgrade-868919 in network mk-kubernetes-upgrade-868919
	I1028 12:45:56.843314  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | I1028 12:45:56.843244  125411 retry.go:31] will retry after 4.521465297s: waiting for machine to come up
	I1028 12:46:01.368473  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.369129  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has current primary IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.369156  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Found IP for machine: 192.168.61.34
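The "will retry after ..." lines above show a polling loop with growing delays while the driver waits for the guest to obtain a DHCP lease. A small, self-contained Go sketch of that pattern follows; waitForIP and lookupIP are hypothetical stand-ins, not minikube's retry package.

// Hedged sketch of the retry-with-growing-delay pattern seen in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a placeholder for querying the libvirt network's DHCP leases.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, roughly matching the log's cadence.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP")
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}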
	I1028 12:46:01.369176  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Reserving static IP address...
	I1028 12:46:01.369666  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-868919", mac: "52:54:00:3f:5d:30", ip: "192.168.61.34"} in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.450437  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Getting to WaitForSSH function...
	I1028 12:46:01.450470  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Reserved static IP address: 192.168.61.34
	I1028 12:46:01.450483  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Waiting for SSH to be available...
	I1028 12:46:01.453598  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.454018  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:01.454061  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.454160  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Using SSH client type: external
	I1028 12:46:01.454191  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa (-rw-------)
	I1028 12:46:01.454240  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:46:01.454256  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | About to run SSH command:
	I1028 12:46:01.454268  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | exit 0
	I1028 12:46:01.579652  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | SSH cmd err, output: <nil>: 
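The WaitForSSH step above simply runs "exit 0" through an external ssh client with the logged option set until it succeeds. A hedged Go sketch of that probe follows; the IP and key path are placeholders, and the option list is trimmed to the essentials shown in the log.

// Hedged sketch: probing SSH availability by running "exit 0" with an external client.
package main

import (
	"fmt"
	"os/exec"
)

func sshExitZero(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (%s)", err, out)
	}
	return nil
}

func main() {
	if err := sshExitZero("192.168.61.34", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("SSH is available")
	}
}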
	I1028 12:46:01.579908  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) KVM machine creation complete!
	I1028 12:46:01.580275  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetConfigRaw
	I1028 12:46:01.580793  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:46:01.580930  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:46:01.581098  125024 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:46:01.581113  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetState
	I1028 12:46:01.582371  125024 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:46:01.582385  125024 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:46:01.582392  125024 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:46:01.582400  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:01.585485  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.585861  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:01.585900  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.586042  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:01.586194  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.586374  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.586531  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:01.586720  125024 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:01.586979  125024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:46:01.586992  125024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:46:01.690703  125024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:46:01.690730  125024 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:46:01.690739  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:01.693632  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.693924  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:01.693969  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.694089  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:01.694275  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.694434  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.694562  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:01.694716  125024 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:01.694887  125024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:46:01.694898  125024 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:46:01.800032  125024 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:46:01.800117  125024 main.go:141] libmachine: found compatible host: buildroot
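Provisioner detection above is driven by "cat /etc/os-release"; the ID field is what identifies the buildroot host. A minimal Go sketch of that parsing, using the os-release text shown in the log as sample input (this is a generic parser, not minikube's detector):

// Hedged sketch: reading ID= from /etc/os-release content.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func osID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println("detected provisioner:", osID(sample)) // -> buildroot
}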
	I1028 12:46:01.800124  125024 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:46:01.800133  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetMachineName
	I1028 12:46:01.800372  125024 buildroot.go:166] provisioning hostname "kubernetes-upgrade-868919"
	I1028 12:46:01.800403  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetMachineName
	I1028 12:46:01.800603  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:01.803254  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.803604  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:01.803643  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.803807  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:01.803974  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.804147  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.804304  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:01.804444  125024 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:01.804644  125024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:46:01.804664  125024 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-868919 && echo "kubernetes-upgrade-868919" | sudo tee /etc/hostname
	I1028 12:46:01.920088  125024 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-868919
	
	I1028 12:46:01.920118  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:01.922769  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.923126  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:01.923158  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:01.923290  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:01.923477  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.923643  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:01.923751  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:01.923938  125024 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:01.924102  125024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:46:01.924118  125024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-868919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-868919/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-868919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:46:02.035339  125024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
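The hostname step assembles a small shell script that rewrites the 127.0.1.1 entry in /etc/hosts, as shown in the command above. A purely illustrative Go sketch of composing that same snippet:

// Hedged sketch: building the /etc/hosts update script seen in the log above.
package main

import "fmt"

func hostsCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsCommand("kubernetes-upgrade-868919"))
}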
	I1028 12:46:02.035373  125024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:46:02.035395  125024 buildroot.go:174] setting up certificates
	I1028 12:46:02.035411  125024 provision.go:84] configureAuth start
	I1028 12:46:02.035425  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetMachineName
	I1028 12:46:02.035716  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:46:02.038428  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.038794  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.038819  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.038924  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:02.041153  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.041433  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.041463  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.041551  125024 provision.go:143] copyHostCerts
	I1028 12:46:02.041649  125024 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:46:02.041668  125024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:46:02.041733  125024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:46:02.041863  125024 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:46:02.041872  125024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:46:02.041901  125024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:46:02.042002  125024 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:46:02.042014  125024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:46:02.042042  125024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:46:02.042125  125024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-868919 san=[127.0.0.1 192.168.61.34 kubernetes-upgrade-868919 localhost minikube]
	I1028 12:46:02.172426  125024 provision.go:177] copyRemoteCerts
	I1028 12:46:02.172511  125024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:46:02.172543  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:02.175411  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.175774  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.175805  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.176022  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:02.176208  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.176347  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:02.176479  125024 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:46:02.257050  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:46:02.278542  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1028 12:46:02.299680  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:46:02.320175  125024 provision.go:87] duration metric: took 284.750891ms to configureAuth
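configureAuth above generates a server certificate whose SAN list covers 127.0.0.1, the machine IP, the machine name, localhost and minikube, then copies it to /etc/docker on the guest. The following Go sketch shows a generic crypto/x509 way to issue a certificate with such a SAN list; it is self-signed for brevity, whereas the real flow signs with the CA key (ca-key.pem), and none of this is minikube's helper code.

// Hedged sketch: issuing a server cert with a SAN list like the one logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-868919"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-868919", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.34")},
	}
	// Self-signed here for brevity; the logged flow signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}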
	I1028 12:46:02.320199  125024 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:46:02.320351  125024 config.go:182] Loaded profile config "kubernetes-upgrade-868919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:46:02.320423  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:02.323071  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.323458  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.323488  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.323669  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:02.323833  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.323971  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.324067  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:02.324170  125024 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:02.324368  125024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:46:02.324382  125024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:46:02.535418  125024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:46:02.535445  125024 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:46:02.535454  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetURL
	I1028 12:46:02.536807  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | Using libvirt version 6000000
	I1028 12:46:02.538883  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.539248  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.539277  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.539460  125024 main.go:141] libmachine: Docker is up and running!
	I1028 12:46:02.539473  125024 main.go:141] libmachine: Reticulating splines...
	I1028 12:46:02.539480  125024 client.go:171] duration metric: took 24.630120339s to LocalClient.Create
	I1028 12:46:02.539502  125024 start.go:167] duration metric: took 24.630207066s to libmachine.API.Create "kubernetes-upgrade-868919"
	I1028 12:46:02.539512  125024 start.go:293] postStartSetup for "kubernetes-upgrade-868919" (driver="kvm2")
	I1028 12:46:02.539522  125024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:46:02.539546  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:46:02.540063  125024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:46:02.540211  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:02.543403  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.543731  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.543761  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.543882  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:02.544070  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.544222  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:02.544355  125024 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:46:02.625570  125024 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:46:02.629164  125024 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:46:02.629189  125024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:46:02.629252  125024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:46:02.629323  125024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:46:02.629407  125024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:46:02.639519  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:46:02.662556  125024 start.go:296] duration metric: took 123.031196ms for postStartSetup
	I1028 12:46:02.662667  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetConfigRaw
	I1028 12:46:02.663295  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:46:02.665621  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.666049  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.666079  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.666273  125024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/config.json ...
	I1028 12:46:02.666452  125024 start.go:128] duration metric: took 24.778097862s to createHost
	I1028 12:46:02.666479  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:02.668970  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.669309  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.669338  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.669458  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:02.669643  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.669806  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.669948  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:02.670100  125024 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:02.670302  125024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:46:02.670315  125024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:46:02.775703  125024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730119562.753453914
	
	I1028 12:46:02.775733  125024 fix.go:216] guest clock: 1730119562.753453914
	I1028 12:46:02.775741  125024 fix.go:229] Guest: 2024-10-28 12:46:02.753453914 +0000 UTC Remote: 2024-10-28 12:46:02.666465233 +0000 UTC m=+49.863474569 (delta=86.988681ms)
	I1028 12:46:02.775783  125024 fix.go:200] guest clock delta is within tolerance: 86.988681ms
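The guest-clock check above parses the output of "date +%s.%N" from the VM and compares it with the host clock. A small Go sketch of that comparison follows; the 2s tolerance is an assumption for illustration only.

// Hedged sketch: computing the guest clock delta from the logged timestamp.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestOutput string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta("1730119562.753453914")
	if err != nil {
		panic(err)
	}
	tolerance := 2 * time.Second // assumed tolerance, for illustration only
	if time.Duration(math.Abs(float64(d))) < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
	}
}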
	I1028 12:46:02.775789  125024 start.go:83] releasing machines lock for "kubernetes-upgrade-868919", held for 24.887646859s
	I1028 12:46:02.775815  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:46:02.776086  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:46:02.778988  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.779349  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.779378  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.779553  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:46:02.780063  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:46:02.780248  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:46:02.780360  125024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:46:02.780422  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:02.780442  125024 ssh_runner.go:195] Run: cat /version.json
	I1028 12:46:02.780477  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:46:02.783114  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.783411  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.783442  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.783462  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.783543  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:02.783718  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.783841  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:02.783864  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:02.783867  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:02.784037  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:46:02.784045  125024 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:46:02.784168  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:46:02.784281  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:46:02.784445  125024 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:46:02.885840  125024 ssh_runner.go:195] Run: systemctl --version
	I1028 12:46:02.891880  125024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:46:03.059041  125024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:46:03.064926  125024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:46:03.065007  125024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:46:03.080038  125024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:46:03.080065  125024 start.go:495] detecting cgroup driver to use...
	I1028 12:46:03.080142  125024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:46:03.097462  125024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:46:03.111172  125024 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:46:03.111231  125024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:46:03.124104  125024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:46:03.137008  125024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:46:03.248505  125024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:46:03.416957  125024 docker.go:233] disabling docker service ...
	I1028 12:46:03.417038  125024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:46:03.430530  125024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:46:03.443430  125024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:46:03.559359  125024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:46:03.669027  125024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:46:03.682550  125024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:46:03.699716  125024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:46:03.699772  125024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:03.709013  125024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:46:03.709075  125024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:03.718081  125024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:03.726717  125024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:03.735599  125024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:46:03.744953  125024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:46:03.753074  125024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:46:03.753121  125024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:46:03.765217  125024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:46:03.773856  125024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:46:03.882865  125024 ssh_runner.go:195] Run: sudo systemctl restart crio
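The CRI-O preparation above is a series of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a daemon-reload and a crio restart. A hedged Go sketch that builds the same command list; in this sketch the commands are only printed, whereas the log shows each one being run over SSH.

// Hedged sketch: composing the CRI-O drop-in edits run in the log above.
package main

import "fmt"

func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(cmd) // in the real flow each command is executed on the guest
	}
}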
	I1028 12:46:03.989078  125024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:46:03.989168  125024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:46:03.993957  125024 start.go:563] Will wait 60s for crictl version
	I1028 12:46:03.994018  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:03.998155  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:46:04.037943  125024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:46:04.038021  125024 ssh_runner.go:195] Run: crio --version
	I1028 12:46:04.064840  125024 ssh_runner.go:195] Run: crio --version
	I1028 12:46:04.097146  125024 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:46:04.099357  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:46:04.102633  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:04.103039  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:45:52 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:46:04.103073  125024 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:46:04.103318  125024 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:46:04.108238  125024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:46:04.120413  125024 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1028 12:46:04.120548  125024 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:46:04.120612  125024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:46:04.158422  125024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:46:04.158490  125024 ssh_runner.go:195] Run: which lz4
	I1028 12:46:04.162341  125024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:46:04.166515  125024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:46:04.166582  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:46:05.612980  125024 crio.go:462] duration metric: took 1.450664253s to copy over tarball
	I1028 12:46:05.613073  125024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:46:08.132259  125024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.519143758s)
	I1028 12:46:08.132296  125024 crio.go:469] duration metric: took 2.519282919s to extract the tarball
	I1028 12:46:08.132308  125024 ssh_runner.go:146] rm: /preloaded.tar.lz4
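The stat/scp/tar/rm sequence above is the preload path: when /preloaded.tar.lz4 is absent on the node, the cached tarball is copied over and unpacked under /var with lz4. A rough standalone sketch of the extract-and-time step follows, assuming the tarball is already at /preloaded.tar.lz4; it runs the logged tar command locally rather than through ssh_runner.

    // Minimal sketch of the preload extraction step, timed the way the
    // "duration metric" lines above are. Illustrative only.
    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v: %s", err, out)
    	}
    	log.Printf("duration metric: took %s to extract the tarball", time.Since(start))
    }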
	I1028 12:46:08.175146  125024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:46:08.226959  125024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:46:08.226991  125024 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:46:08.227070  125024 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:46:08.227099  125024 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:08.227115  125024 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:08.227144  125024 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:46:08.227151  125024 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:08.227163  125024 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:46:08.227185  125024 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:08.227214  125024 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:08.228594  125024 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:08.228622  125024 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:46:08.228606  125024 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:08.228676  125024 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:46:08.228609  125024 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:46:08.228677  125024 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:08.228608  125024 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:08.228719  125024 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:08.386480  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:08.386499  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:08.389647  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:08.397857  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:08.403539  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:46:08.419011  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:46:08.429822  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:08.510524  125024 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:46:08.510581  125024 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:08.510639  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:08.510634  125024 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:46:08.510722  125024 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:46:08.510769  125024 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:08.510817  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:08.510730  125024 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:08.510899  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:08.515079  125024 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:46:08.515117  125024 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:08.515153  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:08.550923  125024 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:46:08.550967  125024 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:46:08.551016  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:08.558269  125024 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:46:08.558323  125024 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:46:08.558373  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:08.558404  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:08.558370  125024 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:46:08.558477  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:08.558503  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:08.558483  125024 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:08.558546  125024 ssh_runner.go:195] Run: which crictl
	I1028 12:46:08.558443  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:08.558522  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:46:08.654858  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:08.654883  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:46:08.654901  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:08.655000  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:08.658994  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:08.659017  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:08.659035  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:46:08.785291  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:08.785338  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:08.785389  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:46:08.797287  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:08.797509  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:08.812545  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:46:08.812544  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:08.923056  125024 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:46:08.923116  125024 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:46:08.946687  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:46:08.946719  125024 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:46:08.946738  125024 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:46:08.948924  125024 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:46:08.949137  125024 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:08.987343  125024 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:46:08.988841  125024 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:46:09.206531  125024 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:46:09.341553  125024 cache_images.go:92] duration metric: took 1.114540804s to LoadCachedImages
	W1028 12:46:09.341666  125024 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
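The LoadCachedImages failure above comes down to a missing file in the local image cache (etcd_3.4.13-0 was never downloaded). Below is a small sketch of the existence check that would surface this up front; the cache directory and the ':'-to-'_' file naming come from the "Loading image from" lines above, and the image list is abbreviated.

    // Check whether the cached image tarballs minikube expects are actually
    // present before attempting a load. Paths mirror the log; not minikube code.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cacheDir := "/home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64"
    	images := []string{
    		"registry.k8s.io/etcd:3.4.13-0",
    		"registry.k8s.io/kube-apiserver:v1.20.0",
    		"registry.k8s.io/coredns:1.7.0",
    	}
    	for _, img := range images {
    		// "registry.k8s.io/etcd:3.4.13-0" is cached as ".../registry.k8s.io/etcd_3.4.13-0"
    		file := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
    		if _, err := os.Stat(file); err != nil {
    			fmt.Println("missing cached image:", file)
    		}
    	}
    }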
	I1028 12:46:09.341682  125024 kubeadm.go:934] updating node { 192.168.61.34 8443 v1.20.0 crio true true} ...
	I1028 12:46:09.341779  125024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-868919 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
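The block above is the kubelet systemd drop-in that is written later in the log to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 432-byte "scp memory" step). A minimal sketch of templating that unit from the node's values; the flags and paths are the ones shown in the log, everything else is simplified.

    // Build the kubelet drop-in text from node-specific values. Illustrative
    // sketch only; minikube generates this in kubeadm.go:946.
    package main

    import "fmt"

    func main() {
    	nodeIP := "192.168.61.34"
    	hostname := "kubernetes-upgrade-868919"
    	kubeletBin := "/var/lib/minikube/binaries/v1.20.0/kubelet"

    	unit := fmt.Sprintf(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=%s

    [Install]
    `, kubeletBin, hostname, nodeIP)

    	fmt.Print(unit)
    	// In the log this content is then copied to the node and followed by
    	// "systemctl daemon-reload" and "systemctl start kubelet".
    }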
	I1028 12:46:09.341876  125024 ssh_runner.go:195] Run: crio config
	I1028 12:46:09.395921  125024 cni.go:84] Creating CNI manager for ""
	I1028 12:46:09.395956  125024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:46:09.395970  125024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:46:09.395999  125024 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-868919 NodeName:kubernetes-upgrade-868919 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:46:09.396194  125024 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-868919"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
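One cheap sanity check on a generated multi-document config like the one above is to split it on "---" and confirm each document's apiVersion and kind before handing the file to kubeadm init --config. A hedged sketch follows; it assumes the file has already been written to /var/tmp/minikube/kubeadm.yaml.new as in the log.

    // Report the apiVersion/kind of each YAML document in the kubeadm config.
    // Simple string handling only; not a full YAML parser and not minikube code.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		var apiVersion, kind string
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "apiVersion:") {
    				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
    			}
    			if strings.HasPrefix(line, "kind:") {
    				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
    			}
    		}
    		fmt.Printf("doc %d: %s %s\n", i, apiVersion, kind)
    	}
    }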
	
	I1028 12:46:09.396272  125024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:46:09.405958  125024 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:46:09.406042  125024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:46:09.415419  125024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1028 12:46:09.430435  125024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:46:09.446221  125024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:46:09.461773  125024 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I1028 12:46:09.465369  125024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:46:09.476976  125024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:46:09.598282  125024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:46:09.613970  125024 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919 for IP: 192.168.61.34
	I1028 12:46:09.613994  125024 certs.go:194] generating shared ca certs ...
	I1028 12:46:09.614012  125024 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:09.614195  125024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:46:09.614246  125024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:46:09.614259  125024 certs.go:256] generating profile certs ...
	I1028 12:46:09.614330  125024 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.key
	I1028 12:46:09.614349  125024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.crt with IP's: []
	I1028 12:46:09.816666  125024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.crt ...
	I1028 12:46:09.816702  125024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.crt: {Name:mk4364995b20015d2ecac63c281abb1dd79539a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:09.816890  125024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.key ...
	I1028 12:46:09.816906  125024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.key: {Name:mk0d7ba71c82e12970db7e7a94f672c686fed1c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:09.816993  125024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key.c2720f4d
	I1028 12:46:09.817013  125024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt.c2720f4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.34]
	I1028 12:46:10.116523  125024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt.c2720f4d ...
	I1028 12:46:10.116579  125024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt.c2720f4d: {Name:mk15f8df44a105c43680204e0ccb4565c916b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:10.192235  125024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key.c2720f4d ...
	I1028 12:46:10.192283  125024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key.c2720f4d: {Name:mk426416321a6511d933a9f0540654e2d670f0b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:10.192474  125024 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt.c2720f4d -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt
	I1028 12:46:10.192608  125024 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key.c2720f4d -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key
	I1028 12:46:10.192693  125024 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key
	I1028 12:46:10.192717  125024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.crt with IP's: []
	I1028 12:46:10.328287  125024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.crt ...
	I1028 12:46:10.328323  125024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.crt: {Name:mk08c1fb6f88f053ffcb68b99eaf9140bee6d7f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:10.337823  125024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key ...
	I1028 12:46:10.337866  125024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key: {Name:mk5ce4ee36bbe36482bbc4fe8de0127e7db82be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
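The "generating signed profile cert" steps above reduce to building x509 templates whose IP SANs cover the service VIP, localhost and the node IP, and signing them with the shared minikubeCA. Below is a self-contained sketch of that idea; it self-signs for brevity (minikube signs with its CA key instead) and copies the IP list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.34] and the 26280h expiry from the log.

    // Generate a server certificate with the IP SANs seen in the log.
    // Self-signed illustration; not minikube's crypto.go.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.34"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }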
	I1028 12:46:10.338129  125024 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:46:10.338178  125024 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:46:10.338193  125024 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:46:10.338220  125024 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:46:10.338254  125024 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:46:10.338283  125024 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:46:10.338333  125024 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:46:10.339154  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:46:10.369982  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:46:10.393419  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:46:10.419273  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:46:10.443490  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 12:46:10.470326  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:46:10.497127  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:46:10.520358  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:46:10.545528  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:46:10.570026  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:46:10.591732  125024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:46:10.613874  125024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:46:10.629502  125024 ssh_runner.go:195] Run: openssl version
	I1028 12:46:10.635112  125024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:46:10.645720  125024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:46:10.650051  125024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:46:10.650120  125024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:46:10.655808  125024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:46:10.667195  125024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:46:10.678813  125024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:46:10.683296  125024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:46:10.683360  125024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:46:10.689235  125024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:46:10.700293  125024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:46:10.711607  125024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:46:10.716003  125024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:46:10.716051  125024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:46:10.721983  125024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:46:10.733770  125024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:46:10.737644  125024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:46:10.737704  125024 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:46:10.737798  125024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:46:10.737866  125024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:46:10.781832  125024 cri.go:89] found id: ""
	I1028 12:46:10.781927  125024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:46:10.791564  125024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:46:10.801028  125024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:46:10.810299  125024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:46:10.810325  125024 kubeadm.go:157] found existing configuration files:
	
	I1028 12:46:10.810381  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:46:10.819507  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:46:10.819593  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:46:10.828421  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:46:10.837320  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:46:10.837413  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:46:10.846341  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:46:10.856345  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:46:10.856424  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:46:10.866807  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:46:10.876780  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:46:10.876865  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
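The grep/rm loop above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can write a fresh one. A compact sketch of the same loop, with the file list and endpoint taken from the log:

    // Remove kubeconfigs that do not reference the expected control-plane
    // endpoint. Illustration of the pattern in kubeadm.go:163, not the real code.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = os.Remove(f) // equivalent of "sudo rm -f <file>"
    		}
    	}
    }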
	I1028 12:46:10.887185  125024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:46:11.013229  125024 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:46:11.013310  125024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:46:11.169709  125024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:46:11.169894  125024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:46:11.170040  125024 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:46:11.408236  125024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:46:11.535333  125024 out.go:235]   - Generating certificates and keys ...
	I1028 12:46:11.535485  125024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:46:11.535641  125024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:46:11.809648  125024 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:46:11.907231  125024 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:46:12.031119  125024 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:46:12.109195  125024 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:46:12.262149  125024 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:46:12.262408  125024 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-868919 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	I1028 12:46:12.325370  125024 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:46:12.325592  125024 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-868919 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	I1028 12:46:12.549441  125024 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:46:12.702121  125024 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:46:12.963623  125024 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:46:12.963952  125024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:46:13.104763  125024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:46:13.260403  125024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:46:13.664448  125024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:46:13.793679  125024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:46:13.820186  125024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:46:13.821318  125024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:46:13.821389  125024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:46:13.938965  125024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:46:13.940750  125024 out.go:235]   - Booting up control plane ...
	I1028 12:46:13.940878  125024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:46:13.945143  125024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:46:13.947569  125024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:46:13.947722  125024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:46:13.954485  125024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:46:53.949541  125024 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:46:53.949786  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:46:53.950063  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:46:58.950450  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:46:58.950680  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:47:08.950060  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:47:08.950333  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:47:28.950109  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:47:28.950450  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:48:08.952567  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:48:08.952775  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:48:08.952789  125024 kubeadm.go:310] 
	I1028 12:48:08.952837  125024 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:48:08.952896  125024 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:48:08.952905  125024 kubeadm.go:310] 
	I1028 12:48:08.952952  125024 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:48:08.953000  125024 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:48:08.953115  125024 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:48:08.953120  125024 kubeadm.go:310] 
	I1028 12:48:08.953204  125024 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:48:08.953231  125024 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:48:08.953270  125024 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:48:08.953276  125024 kubeadm.go:310] 
	I1028 12:48:08.953371  125024 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:48:08.953438  125024 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:48:08.953442  125024 kubeadm.go:310] 
	I1028 12:48:08.953524  125024 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:48:08.953600  125024 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:48:08.953662  125024 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:48:08.953720  125024 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:48:08.953725  125024 kubeadm.go:310] 
	I1028 12:48:08.954633  125024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:48:08.954765  125024 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:48:08.954855  125024 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1028 12:48:08.955027  125024 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-868919 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-868919 localhost] and IPs [192.168.61.34 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
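All of the "[kubelet-check]" lines in the failure above are kubeadm polling the kubelet health endpoint at http://localhost:10248/healthz and getting connection refused. When reproducing this by hand, the same probe can be scripted; the sketch below uses illustrative retry and timeout values (the 4m0s budget is the one kubeadm quotes).

    // Poll the kubelet healthz endpoint until it answers or the budget runs out.
    // Retry interval and client timeout are assumptions, not kubeadm's values.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("http://localhost:10248/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("kubelet is healthy")
    			return
    		}
    		if err != nil {
    			fmt.Println("kubelet not ready yet:", err)
    		} else {
    			resp.Body.Close()
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("timed out waiting for the kubelet, try 'journalctl -xeu kubelet'")
    }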
	
	I1028 12:48:08.955072  125024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:48:10.015163  125024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.060056543s)
	I1028 12:48:10.015252  125024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:48:10.034240  125024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:48:10.044013  125024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:48:10.044041  125024 kubeadm.go:157] found existing configuration files:
	
	I1028 12:48:10.044096  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:48:10.055935  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:48:10.056019  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:48:10.067666  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:48:10.082633  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:48:10.082713  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:48:10.094951  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:48:10.106967  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:48:10.107046  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:48:10.119033  125024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:48:10.130415  125024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:48:10.130488  125024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:48:10.142043  125024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:48:10.218634  125024 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:48:10.218763  125024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:48:10.383615  125024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:48:10.383829  125024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:48:10.384015  125024 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:48:10.576916  125024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:48:10.578864  125024 out.go:235]   - Generating certificates and keys ...
	I1028 12:48:10.578982  125024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:48:10.579064  125024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:48:10.579161  125024 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:48:10.579246  125024 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:48:10.579339  125024 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:48:10.579407  125024 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:48:10.579487  125024 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:48:10.579748  125024 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:48:10.580267  125024 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:48:10.580693  125024 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:48:10.580810  125024 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:48:10.580893  125024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:48:10.684161  125024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:48:10.848498  125024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:48:11.120709  125024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:48:11.356708  125024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:48:11.377302  125024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:48:11.378481  125024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:48:11.378570  125024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:48:11.525375  125024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:48:11.528134  125024 out.go:235]   - Booting up control plane ...
	I1028 12:48:11.528275  125024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:48:11.530121  125024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:48:11.531131  125024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:48:11.531922  125024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:48:11.534029  125024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:48:51.537406  125024 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:48:51.537855  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:48:51.538148  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:48:56.538555  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:48:56.538765  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:49:06.539538  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:49:06.539787  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:49:26.538863  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:49:26.539106  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:50:06.539159  125024 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:50:06.539362  125024 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:50:06.539371  125024 kubeadm.go:310] 
	I1028 12:50:06.539428  125024 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:50:06.539465  125024 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:50:06.539472  125024 kubeadm.go:310] 
	I1028 12:50:06.539500  125024 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:50:06.539529  125024 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:50:06.539677  125024 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:50:06.539702  125024 kubeadm.go:310] 
	I1028 12:50:06.539834  125024 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:50:06.539883  125024 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:50:06.539931  125024 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:50:06.539940  125024 kubeadm.go:310] 
	I1028 12:50:06.540075  125024 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:50:06.540165  125024 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:50:06.540173  125024 kubeadm.go:310] 
	I1028 12:50:06.540302  125024 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:50:06.540441  125024 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:50:06.540546  125024 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:50:06.540663  125024 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:50:06.540674  125024 kubeadm.go:310] 
	I1028 12:50:06.541355  125024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:50:06.541458  125024 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:50:06.541536  125024 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:50:06.541624  125024 kubeadm.go:394] duration metric: took 3m55.803924516s to StartCluster
	I1028 12:50:06.541678  125024 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:50:06.541747  125024 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:50:06.579659  125024 cri.go:89] found id: ""
	I1028 12:50:06.579686  125024 logs.go:282] 0 containers: []
	W1028 12:50:06.579694  125024 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:50:06.579701  125024 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:50:06.579754  125024 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:50:06.611368  125024 cri.go:89] found id: ""
	I1028 12:50:06.611401  125024 logs.go:282] 0 containers: []
	W1028 12:50:06.611410  125024 logs.go:284] No container was found matching "etcd"
	I1028 12:50:06.611417  125024 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:50:06.611475  125024 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:50:06.646907  125024 cri.go:89] found id: ""
	I1028 12:50:06.646951  125024 logs.go:282] 0 containers: []
	W1028 12:50:06.646960  125024 logs.go:284] No container was found matching "coredns"
	I1028 12:50:06.646967  125024 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:50:06.647019  125024 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:50:06.681251  125024 cri.go:89] found id: ""
	I1028 12:50:06.681280  125024 logs.go:282] 0 containers: []
	W1028 12:50:06.681291  125024 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:50:06.681300  125024 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:50:06.681359  125024 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:50:06.713803  125024 cri.go:89] found id: ""
	I1028 12:50:06.713832  125024 logs.go:282] 0 containers: []
	W1028 12:50:06.713840  125024 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:50:06.713854  125024 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:50:06.713912  125024 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:50:06.746439  125024 cri.go:89] found id: ""
	I1028 12:50:06.746461  125024 logs.go:282] 0 containers: []
	W1028 12:50:06.746470  125024 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:50:06.746476  125024 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:50:06.746525  125024 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:50:06.783452  125024 cri.go:89] found id: ""
	I1028 12:50:06.783475  125024 logs.go:282] 0 containers: []
	W1028 12:50:06.783483  125024 logs.go:284] No container was found matching "kindnet"
	I1028 12:50:06.783493  125024 logs.go:123] Gathering logs for container status ...
	I1028 12:50:06.783507  125024 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:50:06.818429  125024 logs.go:123] Gathering logs for kubelet ...
	I1028 12:50:06.818460  125024 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:50:06.866574  125024 logs.go:123] Gathering logs for dmesg ...
	I1028 12:50:06.866615  125024 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:50:06.879539  125024 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:50:06.879572  125024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:50:06.999264  125024 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:50:06.999287  125024 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:50:06.999302  125024 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1028 12:50:07.104047  125024 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:50:07.104116  125024 out.go:270] * 
	* 
	W1028 12:50:07.104180  125024 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:50:07.104195  125024 out.go:270] * 
	* 
	W1028 12:50:07.105061  125024 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:50:07.108063  125024 out.go:201] 
	W1028 12:50:07.109551  125024 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:50:07.109598  125024 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:50:07.109616  125024 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:50:07.111029  125024 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
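The kubeadm output above fails repeatedly at the kubelet health check on 127.0.0.1:10248, so the control plane never comes up. A minimal triage sketch, assuming shell access to the VM (e.g. via `minikube ssh -p kubernetes-upgrade-868919`, profile name taken from this run); the commands are the ones the kubeadm output itself suggests, not part of the captured log:

	# Is the kubelet unit running, and why did it stop?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# Did CRI-O start (and then lose) any control-plane containers?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect a specific failing container's logs (CONTAINERID from the listing above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID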
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-868919
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-868919: (6.294852499s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-868919 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-868919 status --format={{.Host}}: exit status 7 (75.958527ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.381582698s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-868919 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.97366ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-868919] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-868919
	    minikube start -p kubernetes-upgrade-868919 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8689192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-868919 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
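The downgrade attempt is rejected by design: minikube exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) because Kubernetes does not support in-place control-plane downgrades, so state written by v1.31.2 may not be readable by a v1.20.0 apiserver. For reference, the first recovery path suggested in the output above amounts to (profile name from this run):

	minikube delete -p kubernetes-upgrade-868919
	minikube start -p kubernetes-upgrade-868919 --kubernetes-version=v1.20.0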
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m48.023718293s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-868919] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-868919" primary control-plane node in "kubernetes-upgrade-868919" cluster
	* Updating the running kvm2 "kubernetes-upgrade-868919" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:50:54.100975  128715 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:50:54.101122  128715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:50:54.101134  128715 out.go:358] Setting ErrFile to fd 2...
	I1028 12:50:54.101141  128715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:50:54.101452  128715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:50:54.102218  128715 out.go:352] Setting JSON to false
	I1028 12:50:54.103553  128715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9204,"bootTime":1730110650,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:50:54.103666  128715 start.go:139] virtualization: kvm guest
	I1028 12:50:54.105744  128715 out.go:177] * [kubernetes-upgrade-868919] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:50:54.107190  128715 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:50:54.107190  128715 notify.go:220] Checking for updates...
	I1028 12:50:54.108650  128715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:50:54.110054  128715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:50:54.111445  128715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:50:54.112808  128715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:50:54.114226  128715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:50:54.115822  128715 config.go:182] Loaded profile config "kubernetes-upgrade-868919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:50:54.116239  128715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:50:54.116332  128715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:50:54.131106  128715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I1028 12:50:54.131503  128715 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:50:54.132027  128715 main.go:141] libmachine: Using API Version  1
	I1028 12:50:54.132052  128715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:50:54.132379  128715 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:50:54.132549  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:54.132765  128715 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:50:54.133038  128715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:50:54.133075  128715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:50:54.147325  128715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I1028 12:50:54.147845  128715 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:50:54.148300  128715 main.go:141] libmachine: Using API Version  1
	I1028 12:50:54.148321  128715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:50:54.148687  128715 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:50:54.148883  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:54.184664  128715 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:50:54.185892  128715 start.go:297] selected driver: kvm2
	I1028 12:50:54.185906  128715 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:50:54.185998  128715 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:50:54.186649  128715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:50:54.186748  128715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:50:54.200913  128715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:50:54.201312  128715 cni.go:84] Creating CNI manager for ""
	I1028 12:50:54.201366  128715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:50:54.201398  128715 start.go:340] cluster config:
	{Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:50:54.201511  128715 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:50:54.203220  128715 out.go:177] * Starting "kubernetes-upgrade-868919" primary control-plane node in "kubernetes-upgrade-868919" cluster
	I1028 12:50:54.204444  128715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:50:54.204479  128715 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:50:54.204491  128715 cache.go:56] Caching tarball of preloaded images
	I1028 12:50:54.204567  128715 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:50:54.204578  128715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:50:54.204665  128715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/config.json ...
	I1028 12:50:54.204831  128715 start.go:360] acquireMachinesLock for kubernetes-upgrade-868919: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:50:54.204872  128715 start.go:364] duration metric: took 24.73µs to acquireMachinesLock for "kubernetes-upgrade-868919"
	I1028 12:50:54.204887  128715 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:50:54.204895  128715 fix.go:54] fixHost starting: 
	I1028 12:50:54.205160  128715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:50:54.205191  128715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:50:54.219356  128715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I1028 12:50:54.219875  128715 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:50:54.220340  128715 main.go:141] libmachine: Using API Version  1
	I1028 12:50:54.220365  128715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:50:54.220777  128715 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:50:54.220959  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:54.221108  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetState
	I1028 12:50:54.222682  128715 fix.go:112] recreateIfNeeded on kubernetes-upgrade-868919: state=Running err=<nil>
	W1028 12:50:54.222701  128715 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:50:54.224315  128715 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-868919" VM ...
	I1028 12:50:54.225448  128715 machine.go:93] provisionDockerMachine start ...
	I1028 12:50:54.225477  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:54.225655  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:54.228047  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.228527  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:54.228547  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.228635  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:54.228778  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:54.228922  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:54.229048  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:54.229192  128715 main.go:141] libmachine: Using SSH client type: native
	I1028 12:50:54.229410  128715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:50:54.229423  128715 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:50:54.332390  128715 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-868919
	
	I1028 12:50:54.332432  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetMachineName
	I1028 12:50:54.332648  128715 buildroot.go:166] provisioning hostname "kubernetes-upgrade-868919"
	I1028 12:50:54.332674  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetMachineName
	I1028 12:50:54.332855  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:54.335494  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.335948  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:54.335981  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.336137  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:54.336288  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:54.336411  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:54.336553  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:54.336733  128715 main.go:141] libmachine: Using SSH client type: native
	I1028 12:50:54.336931  128715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:50:54.336946  128715 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-868919 && echo "kubernetes-upgrade-868919" | sudo tee /etc/hostname
	I1028 12:50:54.457498  128715 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-868919
	
	I1028 12:50:54.457529  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:54.460249  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.460643  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:54.460677  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.460781  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:54.460962  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:54.461149  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:54.461275  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:54.461443  128715 main.go:141] libmachine: Using SSH client type: native
	I1028 12:50:54.461668  128715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:50:54.461685  128715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-868919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-868919/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-868919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:50:54.567970  128715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:50:54.568032  128715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:50:54.568067  128715 buildroot.go:174] setting up certificates
	I1028 12:50:54.568083  128715 provision.go:84] configureAuth start
	I1028 12:50:54.568102  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetMachineName
	I1028 12:50:54.568386  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:50:54.571054  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.571378  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:54.571408  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.571556  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:54.573565  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.573928  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:54.573963  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.574100  128715 provision.go:143] copyHostCerts
	I1028 12:50:54.574166  128715 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:50:54.574180  128715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:50:54.574245  128715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:50:54.574341  128715 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:50:54.574349  128715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:50:54.574373  128715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:50:54.574439  128715 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:50:54.574445  128715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:50:54.574466  128715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:50:54.574523  128715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-868919 san=[127.0.0.1 192.168.61.34 kubernetes-upgrade-868919 localhost minikube]
	I1028 12:50:54.800132  128715 provision.go:177] copyRemoteCerts
	I1028 12:50:54.800208  128715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:50:54.800246  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:54.803336  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.803785  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:54.803820  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:54.804035  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:54.804255  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:54.804467  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:54.804628  128715 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:50:54.896698  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1028 12:50:54.958592  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:50:55.005284  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:50:55.045440  128715 provision.go:87] duration metric: took 477.33617ms to configureAuth
	I1028 12:50:55.045494  128715 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:50:55.045712  128715 config.go:182] Loaded profile config "kubernetes-upgrade-868919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:50:55.045798  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:55.048836  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:55.049291  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:55.049329  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:55.049500  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:55.049700  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:55.049881  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:55.050017  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:55.050168  128715 main.go:141] libmachine: Using SSH client type: native
	I1028 12:50:55.050380  128715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:50:55.050405  128715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:50:55.942543  128715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:50:55.942577  128715 machine.go:96] duration metric: took 1.717109018s to provisionDockerMachine
	I1028 12:50:55.942588  128715 start.go:293] postStartSetup for "kubernetes-upgrade-868919" (driver="kvm2")
	I1028 12:50:55.942598  128715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:50:55.942614  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:55.942907  128715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:50:55.942958  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:55.945420  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:55.945765  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:55.945797  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:55.945911  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:55.946066  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:55.946238  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:55.946373  128715 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:50:56.025313  128715 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:50:56.029260  128715 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:50:56.029280  128715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:50:56.029353  128715 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:50:56.029452  128715 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:50:56.029573  128715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:50:56.038662  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:50:56.062449  128715 start.go:296] duration metric: took 119.843447ms for postStartSetup
	I1028 12:50:56.062491  128715 fix.go:56] duration metric: took 1.857596881s for fixHost
	I1028 12:50:56.062518  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:56.064818  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.065200  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:56.065215  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.065400  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:56.065605  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:56.065806  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:56.065971  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:56.066143  128715 main.go:141] libmachine: Using SSH client type: native
	I1028 12:50:56.066345  128715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I1028 12:50:56.066362  128715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:50:56.164117  128715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730119856.130538763
	
	I1028 12:50:56.164142  128715 fix.go:216] guest clock: 1730119856.130538763
	I1028 12:50:56.164150  128715 fix.go:229] Guest: 2024-10-28 12:50:56.130538763 +0000 UTC Remote: 2024-10-28 12:50:56.062495974 +0000 UTC m=+1.999986390 (delta=68.042789ms)
	I1028 12:50:56.164179  128715 fix.go:200] guest clock delta is within tolerance: 68.042789ms
	I1028 12:50:56.164184  128715 start.go:83] releasing machines lock for "kubernetes-upgrade-868919", held for 1.95930303s
	I1028 12:50:56.164204  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:56.164458  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:50:56.166849  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.167216  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:56.167249  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.167380  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:56.167898  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:56.168083  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .DriverName
	I1028 12:50:56.168171  128715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:50:56.168250  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:56.168279  128715 ssh_runner.go:195] Run: cat /version.json
	I1028 12:50:56.168298  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHHostname
	I1028 12:50:56.170796  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.171084  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.171130  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:56.171150  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.171249  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:56.171424  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:56.171453  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:50:56.171482  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:50:56.171579  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:56.171663  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHPort
	I1028 12:50:56.171750  128715 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:50:56.171819  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHKeyPath
	I1028 12:50:56.171946  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetSSHUsername
	I1028 12:50:56.172077  128715 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kubernetes-upgrade-868919/id_rsa Username:docker}
	I1028 12:50:56.249387  128715 ssh_runner.go:195] Run: systemctl --version
	I1028 12:50:56.288129  128715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:50:56.522786  128715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:50:56.552442  128715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:50:56.552530  128715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:50:56.586027  128715 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 12:50:56.586053  128715 start.go:495] detecting cgroup driver to use...
	I1028 12:50:56.586131  128715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:50:56.651062  128715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:50:56.707974  128715 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:50:56.708044  128715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:50:56.742997  128715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:50:56.781369  128715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:50:57.012196  128715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:50:57.189850  128715 docker.go:233] disabling docker service ...
	I1028 12:50:57.189972  128715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:50:57.207863  128715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:50:57.222224  128715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:50:57.427451  128715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:50:57.585849  128715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:50:57.602372  128715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:50:57.621322  128715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:50:57.621400  128715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:50:57.633482  128715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:50:57.633560  128715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:50:57.643991  128715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:50:57.653430  128715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:50:57.665454  128715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:50:57.677735  128715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:50:57.692297  128715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:50:57.704782  128715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:50:57.715680  128715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:50:57.726854  128715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:50:57.736731  128715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:50:57.914172  128715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:52:28.200214  128715 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.286002409s)
	I1028 12:52:28.200252  128715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:52:28.200315  128715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:52:28.206553  128715 start.go:563] Will wait 60s for crictl version
	I1028 12:52:28.206617  128715 ssh_runner.go:195] Run: which crictl
	I1028 12:52:28.210283  128715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:52:28.259917  128715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:52:28.260011  128715 ssh_runner.go:195] Run: crio --version
	I1028 12:52:28.286634  128715 ssh_runner.go:195] Run: crio --version
	I1028 12:52:28.313126  128715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:52:28.314440  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:52:28.317373  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:52:28.317767  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:52:28.317816  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:52:28.318007  128715 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:52:28.321901  128715 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:52:28.322007  128715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:52:28.322066  128715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:52:28.359197  128715 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:52:28.359216  128715 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:52:28.359271  128715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:52:28.389735  128715 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:52:28.389760  128715 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:52:28.389767  128715 kubeadm.go:934] updating node { 192.168.61.34 8443 v1.31.2 crio true true} ...
	I1028 12:52:28.389895  128715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-868919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:52:28.389992  128715 ssh_runner.go:195] Run: crio config
	I1028 12:52:28.441870  128715 cni.go:84] Creating CNI manager for ""
	I1028 12:52:28.441896  128715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:52:28.441908  128715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:52:28.441941  128715 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-868919 NodeName:kubernetes-upgrade-868919 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:52:28.442099  128715 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-868919"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.34"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:52:28.442175  128715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:52:28.451158  128715 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:52:28.451220  128715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:52:28.459646  128715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1028 12:52:28.474807  128715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:52:28.489247  128715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I1028 12:52:28.503932  128715 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I1028 12:52:28.507178  128715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:52:28.633544  128715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:52:28.647163  128715 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919 for IP: 192.168.61.34
	I1028 12:52:28.647187  128715 certs.go:194] generating shared ca certs ...
	I1028 12:52:28.647212  128715 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:52:28.647401  128715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:52:28.647468  128715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:52:28.647483  128715 certs.go:256] generating profile certs ...
	I1028 12:52:28.647599  128715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.key
	I1028 12:52:28.647682  128715 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key.c2720f4d
	I1028 12:52:28.647734  128715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key
	I1028 12:52:28.647889  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:52:28.647928  128715 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:52:28.647945  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:52:28.647980  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:52:28.648017  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:52:28.648048  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:52:28.648095  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:52:28.649018  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:52:28.670882  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:52:28.692228  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:52:28.713143  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:52:28.734268  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 12:52:28.754710  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:52:28.775433  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:52:28.796128  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:52:28.816557  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:52:28.837645  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:52:28.858623  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:52:28.879864  128715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:52:28.894531  128715 ssh_runner.go:195] Run: openssl version
	I1028 12:52:28.900175  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:52:28.909858  128715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:52:28.913784  128715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:52:28.913824  128715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:52:28.919013  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:52:28.927224  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:52:28.936696  128715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:52:28.941086  128715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:52:28.941137  128715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:52:28.946166  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:52:28.954271  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:52:28.963669  128715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:52:28.967531  128715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:52:28.967575  128715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:52:28.972530  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:52:28.980578  128715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:52:28.984671  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:52:28.989535  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:52:28.994552  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:52:28.999381  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:52:29.004235  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:52:29.009113  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:52:29.013926  128715 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:52:29.014013  128715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:52:29.014048  128715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:52:29.048041  128715 cri.go:89] found id: "b8f77b666e2669b9f3ed9ad5071932d7e98d7342e21da5bb989c8e72b6f68515"
	I1028 12:52:29.048066  128715 cri.go:89] found id: "2f2e0d4be7fc6383c67993620dc1b6aae0b635d1ecdd327e39f20ef2eaef84e8"
	I1028 12:52:29.048070  128715 cri.go:89] found id: "4883537a712bae8e7c6e1a1a5acbc38dec819294783f8d10baa1c45bb3cafe40"
	I1028 12:52:29.048075  128715 cri.go:89] found id: "1e444d72b8f1dfddd8517f092cb5151ccfdc78978102d7aab68b7ff4756581c4"
	I1028 12:52:29.048078  128715 cri.go:89] found id: "eaed5b11b7e76c1efc03a7190c0b0a31325ce2611f708c79fd6a9ae56173429e"
	I1028 12:52:29.048082  128715 cri.go:89] found id: "2d69bc15a30c50d85d25f569586a1a6a022247959a38bbd2aa0f6ce506aa5300"
	I1028 12:52:29.048084  128715 cri.go:89] found id: "ff39897330a42f5b61a2ece380c65a2a97fb1659c8b4c612c12f8d9934fa173a"
	I1028 12:52:29.048087  128715 cri.go:89] found id: "70fb5d4c3c395d090ffc0beb1509e22466995f9d20f419515877adb6c66d8509"
	I1028 12:52:29.048089  128715 cri.go:89] found id: "4257c2aa210e63cfe5f0ee27cd02e5b957c47f90545ae999b407cc1ed9ea170c"
	I1028 12:52:29.048095  128715 cri.go:89] found id: ""
	I1028 12:52:29.048138  128715 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
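The -checkend 86400 probes in the stderr log above ask openssl whether each control-plane certificate stays valid for at least another 24 hours. A minimal shell sketch of the same check, with paths copied from the log (the loop and the warning message are illustrative, not part of minikube):

    # openssl exits non-zero for any cert that expires within 86400s (24h)
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
        || echo "WARNING: $crt expires within 24h"
    done

When every check exits 0, the start path above skips certificate regeneration and proceeds straight to StartCluster, as the log does here.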
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-868919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-28 13:04:42.092510031 +0000 UTC m=+5263.658081472
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-868919 -n kubernetes-upgrade-868919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-868919 -n kubernetes-upgrade-868919: exit status 2 (229.459411ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
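The post-mortem helpers read individual fields from minikube's status output via Go templates. The same probes can be run by hand against this profile; the field names are the ones the helpers use above, and combining them on one line is only an illustration:

    # Host state only, as in the helpers_test.go:239 check above
    out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-868919
    # Host and apiserver state together (illustrative combination)
    out/minikube-linux-amd64 status --format='{{.Host}}/{{.APIServer}}' -p kubernetes-upgrade-868919

As the helper notes, a non-zero exit (status 2 here) is expected whenever a component is stopped, so the exit code alone does not mean the status query failed.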
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-868919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-868919 logs -n 25: (2.785335225s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-764199                                 | cert-options-764199       | jenkins | v1.34.0 | 28 Oct 24 12:45 UTC | 28 Oct 24 12:45 UTC |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464    | jenkins | v1.34.0 | 28 Oct 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-232896 stop                            | minikube                  | jenkins | v1.26.0 | 28 Oct 24 12:46 UTC | 28 Oct 24 12:47 UTC |
	| start   | -p stopped-upgrade-232896                              | stopped-upgrade-232896    | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:47 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-717454                              | cert-expiration-717454    | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:48 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-232896                              | stopped-upgrade-232896    | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:47 UTC |
	| start   | -p no-preload-702694                                   | no-preload-702694         | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-717454                              | cert-expiration-717454    | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:48 UTC |
	| start   | -p embed-certs-818470                                  | embed-certs-818470        | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-818470            | embed-certs-818470        | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-818470                                  | embed-certs-818470        | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-702694             | no-preload-702694         | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-702694                                   | no-preload-702694         | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919 | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919 | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-733464        | old-k8s-version-733464    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919 | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919 | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-818470                 | embed-certs-818470        | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-818470                                  | embed-certs-818470        | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC | 28 Oct 24 13:02 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-702694                  | no-preload-702694         | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-702694                                   | no-preload-702694         | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464    | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464    | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464    | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:52:27
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:52:27.656838  129528 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:52:27.656950  129528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:52:27.656959  129528 out.go:358] Setting ErrFile to fd 2...
	I1028 12:52:27.656963  129528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:52:27.657136  129528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:52:27.657741  129528 out.go:352] Setting JSON to false
	I1028 12:52:27.658727  129528 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9298,"bootTime":1730110650,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:52:27.658827  129528 start.go:139] virtualization: kvm guest
	I1028 12:52:27.661911  129528 out.go:177] * [old-k8s-version-733464] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:52:27.663379  129528 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:52:27.663430  129528 notify.go:220] Checking for updates...
	I1028 12:52:27.666327  129528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:52:27.667617  129528 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:52:27.668894  129528 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:52:27.670255  129528 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:52:27.671486  129528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:52:27.673037  129528 config.go:182] Loaded profile config "old-k8s-version-733464": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:52:27.673442  129528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:52:27.673486  129528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:52:27.688471  129528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I1028 12:52:27.688951  129528 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:52:27.689585  129528 main.go:141] libmachine: Using API Version  1
	I1028 12:52:27.689612  129528 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:52:27.689962  129528 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:52:27.690135  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:52:27.691892  129528 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 12:52:27.693082  129528 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:52:27.693409  129528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:52:27.693451  129528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:52:27.707942  129528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I1028 12:52:27.708363  129528 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:52:27.708825  129528 main.go:141] libmachine: Using API Version  1
	I1028 12:52:27.708851  129528 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:52:27.709165  129528 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:52:27.709370  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:52:27.743741  129528 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:52:27.744974  129528 start.go:297] selected driver: kvm2
	I1028 12:52:27.744986  129528 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:52:27.745097  129528 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:52:27.745798  129528 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:52:27.745895  129528 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:52:27.760100  129528 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:52:27.760465  129528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:52:27.760495  129528 cni.go:84] Creating CNI manager for ""
	I1028 12:52:27.760542  129528 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:52:27.760583  129528 start.go:340] cluster config:
	{Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:52:27.760681  129528 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:52:27.762428  129528 out.go:177] * Starting "old-k8s-version-733464" primary control-plane node in "old-k8s-version-733464" cluster
	I1028 12:52:28.200214  128715 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.286002409s)
	I1028 12:52:28.200252  128715 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:52:28.200315  128715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:52:28.206553  128715 start.go:563] Will wait 60s for crictl version
	I1028 12:52:28.206617  128715 ssh_runner.go:195] Run: which crictl
	I1028 12:52:28.210283  128715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:52:28.259917  128715 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:52:28.260011  128715 ssh_runner.go:195] Run: crio --version
	I1028 12:52:28.286634  128715 ssh_runner.go:195] Run: crio --version
	I1028 12:52:28.313126  128715 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:52:28.314440  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) Calling .GetIP
	I1028 12:52:28.317373  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:52:28.317767  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:5d:30", ip: ""} in network mk-kubernetes-upgrade-868919: {Iface:virbr3 ExpiryTime:2024-10-28 13:50:24 +0000 UTC Type:0 Mac:52:54:00:3f:5d:30 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:kubernetes-upgrade-868919 Clientid:01:52:54:00:3f:5d:30}
	I1028 12:52:28.317816  128715 main.go:141] libmachine: (kubernetes-upgrade-868919) DBG | domain kubernetes-upgrade-868919 has defined IP address 192.168.61.34 and MAC address 52:54:00:3f:5d:30 in network mk-kubernetes-upgrade-868919
	I1028 12:52:28.318007  128715 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:52:28.321901  128715 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:52:28.322007  128715 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:52:28.322066  128715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:52:28.359197  128715 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:52:28.359216  128715 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:52:28.359271  128715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:52:28.389735  128715 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:52:28.389760  128715 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:52:28.389767  128715 kubeadm.go:934] updating node { 192.168.61.34 8443 v1.31.2 crio true true} ...
	I1028 12:52:28.389895  128715 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-868919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:52:28.389992  128715 ssh_runner.go:195] Run: crio config
	I1028 12:52:28.441870  128715 cni.go:84] Creating CNI manager for ""
	I1028 12:52:28.441896  128715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:52:28.441908  128715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:52:28.441941  128715 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-868919 NodeName:kubernetes-upgrade-868919 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:52:28.442099  128715 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-868919"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.34"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:52:28.442175  128715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:52:28.451158  128715 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:52:28.451220  128715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:52:28.459646  128715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1028 12:52:28.474807  128715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:52:28.489247  128715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I1028 12:52:28.503932  128715 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I1028 12:52:28.507178  128715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:52:28.633544  128715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:52:28.647163  128715 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919 for IP: 192.168.61.34
	I1028 12:52:28.647187  128715 certs.go:194] generating shared ca certs ...
	I1028 12:52:28.647212  128715 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:52:28.647401  128715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:52:28.647468  128715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:52:28.647483  128715 certs.go:256] generating profile certs ...
	I1028 12:52:28.647599  128715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/client.key
	I1028 12:52:28.647682  128715 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key.c2720f4d
	I1028 12:52:28.647734  128715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key
	I1028 12:52:28.647889  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:52:28.647928  128715 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:52:28.647945  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:52:28.647980  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:52:28.648017  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:52:28.648048  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:52:28.648095  128715 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:52:28.649018  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:52:28.670882  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:52:28.692228  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:52:28.713143  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:52:28.734268  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 12:52:28.754710  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:52:28.775433  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:52:28.796128  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kubernetes-upgrade-868919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:52:28.816557  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:52:28.837645  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:52:28.858623  128715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:52:28.879864  128715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:52:28.894531  128715 ssh_runner.go:195] Run: openssl version
	I1028 12:52:28.900175  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:52:28.909858  128715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:52:28.913784  128715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:52:28.913824  128715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:52:28.919013  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:52:28.927224  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:52:28.936696  128715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:52:28.941086  128715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:52:28.941137  128715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:52:28.946166  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:52:28.954271  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:52:28.963669  128715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:52:28.967531  128715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:52:28.967575  128715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:52:28.972530  128715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:52:28.980578  128715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:52:28.984671  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:52:28.989535  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:52:28.994552  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:52:28.999381  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:52:29.004235  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:52:29.009113  128715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:52:29.013926  128715 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-868919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-868919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:52:29.014013  128715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:52:29.014048  128715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:52:29.048041  128715 cri.go:89] found id: "b8f77b666e2669b9f3ed9ad5071932d7e98d7342e21da5bb989c8e72b6f68515"
	I1028 12:52:29.048066  128715 cri.go:89] found id: "2f2e0d4be7fc6383c67993620dc1b6aae0b635d1ecdd327e39f20ef2eaef84e8"
	I1028 12:52:29.048070  128715 cri.go:89] found id: "4883537a712bae8e7c6e1a1a5acbc38dec819294783f8d10baa1c45bb3cafe40"
	I1028 12:52:29.048075  128715 cri.go:89] found id: "1e444d72b8f1dfddd8517f092cb5151ccfdc78978102d7aab68b7ff4756581c4"
	I1028 12:52:29.048078  128715 cri.go:89] found id: "eaed5b11b7e76c1efc03a7190c0b0a31325ce2611f708c79fd6a9ae56173429e"
	I1028 12:52:29.048082  128715 cri.go:89] found id: "2d69bc15a30c50d85d25f569586a1a6a022247959a38bbd2aa0f6ce506aa5300"
	I1028 12:52:29.048084  128715 cri.go:89] found id: "ff39897330a42f5b61a2ece380c65a2a97fb1659c8b4c612c12f8d9934fa173a"
	I1028 12:52:29.048087  128715 cri.go:89] found id: "70fb5d4c3c395d090ffc0beb1509e22466995f9d20f419515877adb6c66d8509"
	I1028 12:52:29.048089  128715 cri.go:89] found id: "4257c2aa210e63cfe5f0ee27cd02e5b957c47f90545ae999b407cc1ed9ea170c"
	I1028 12:52:29.048095  128715 cri.go:89] found id: ""
	I1028 12:52:29.048138  128715 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
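As part of StartCluster, the log above lists existing kube-system containers through the CRI label shown by cri.go before deciding how to restart the control plane. A sketch of the same query run directly on the node (the inspect step and the copied container ID are illustrative only):

    # IDs of all kube-system containers, running or exited
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Inspect one of the IDs the log found (any returned ID works)
    sudo crictl inspect b8f77b666e2669b9f3ed9ad5071932d7e98d7342e21da5bb989c8e72b6f68515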
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-868919 -n kubernetes-upgrade-868919
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-868919 -n kubernetes-upgrade-868919: exit status 2 (217.533281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-868919" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-868919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-868919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-868919: (1.142761131s)
--- FAIL: TestKubernetesUpgrade (1173.67s)
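To reproduce just this failure outside the full job, the test can be invoked on its own from a minikube checkout. A rough sketch, assuming out/minikube-linux-amd64 and the kvm2 driver are already built as they are in this environment; the timeout value and any extra flag plumbing (driver selection, make targets) are assumptions, not taken from this report:

    # Run only TestKubernetesUpgrade from the integration suite (sketch)
    go test -v -timeout 90m ./test/integration -run 'TestKubernetesUpgrade$'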

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (292.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-733464 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-733464 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m52.296771367s)

                                                
                                                
-- stdout --
	* [old-k8s-version-733464] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-733464" primary control-plane node in "old-k8s-version-733464" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:45:57.044787  125765 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:45:57.044927  125765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:45:57.044938  125765 out.go:358] Setting ErrFile to fd 2...
	I1028 12:45:57.044943  125765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:45:57.045113  125765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:45:57.045715  125765 out.go:352] Setting JSON to false
	I1028 12:45:57.046635  125765 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8907,"bootTime":1730110650,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:45:57.046736  125765 start.go:139] virtualization: kvm guest
	I1028 12:45:57.049041  125765 out.go:177] * [old-k8s-version-733464] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:45:57.050366  125765 notify.go:220] Checking for updates...
	I1028 12:45:57.050377  125765 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:45:57.051604  125765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:45:57.052953  125765 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:45:57.054151  125765 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:45:57.055361  125765 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:45:57.056460  125765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:45:57.057946  125765 config.go:182] Loaded profile config "cert-expiration-717454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:45:57.058035  125765 config.go:182] Loaded profile config "kubernetes-upgrade-868919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:45:57.058116  125765 config.go:182] Loaded profile config "stopped-upgrade-232896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:45:57.058181  125765 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:45:57.098581  125765 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:45:57.099884  125765 start.go:297] selected driver: kvm2
	I1028 12:45:57.099898  125765 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:45:57.099914  125765 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:45:57.100590  125765 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:45:57.100696  125765 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:45:57.115235  125765 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:45:57.115280  125765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 12:45:57.115610  125765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:45:57.115683  125765 cni.go:84] Creating CNI manager for ""
	I1028 12:45:57.115754  125765 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:45:57.115770  125765 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 12:45:57.115841  125765 start.go:340] cluster config:
	{Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:45:57.116013  125765 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:45:57.117684  125765 out.go:177] * Starting "old-k8s-version-733464" primary control-plane node in "old-k8s-version-733464" cluster
	I1028 12:45:57.118884  125765 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:45:57.118929  125765 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 12:45:57.118942  125765 cache.go:56] Caching tarball of preloaded images
	I1028 12:45:57.119046  125765 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:45:57.119069  125765 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 12:45:57.119184  125765 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/config.json ...
	I1028 12:45:57.119224  125765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/config.json: {Name:mk9c2b42ff32d5e841a83d7e45df985636a863e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:45:57.119389  125765 start.go:360] acquireMachinesLock for old-k8s-version-733464: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:46:23.703568  125765 start.go:364] duration metric: took 26.584139429s to acquireMachinesLock for "old-k8s-version-733464"
	I1028 12:46:23.703682  125765 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:46:23.703814  125765 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 12:46:23.706760  125765 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 12:46:23.706962  125765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:46:23.707018  125765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:46:23.723026  125765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42611
	I1028 12:46:23.723443  125765 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:46:23.724092  125765 main.go:141] libmachine: Using API Version  1
	I1028 12:46:23.724120  125765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:46:23.724533  125765 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:46:23.724751  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetMachineName
	I1028 12:46:23.724915  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:23.725103  125765 start.go:159] libmachine.API.Create for "old-k8s-version-733464" (driver="kvm2")
	I1028 12:46:23.725138  125765 client.go:168] LocalClient.Create starting
	I1028 12:46:23.725181  125765 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 12:46:23.725225  125765 main.go:141] libmachine: Decoding PEM data...
	I1028 12:46:23.725249  125765 main.go:141] libmachine: Parsing certificate...
	I1028 12:46:23.725334  125765 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 12:46:23.725362  125765 main.go:141] libmachine: Decoding PEM data...
	I1028 12:46:23.725382  125765 main.go:141] libmachine: Parsing certificate...
	I1028 12:46:23.725408  125765 main.go:141] libmachine: Running pre-create checks...
	I1028 12:46:23.725427  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .PreCreateCheck
	I1028 12:46:23.725767  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetConfigRaw
	I1028 12:46:23.726210  125765 main.go:141] libmachine: Creating machine...
	I1028 12:46:23.726232  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .Create
	I1028 12:46:23.726403  125765 main.go:141] libmachine: (old-k8s-version-733464) Creating KVM machine...
	I1028 12:46:23.727550  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found existing default KVM network
	I1028 12:46:23.729048  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:23.728852  126105 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011cba0}
	I1028 12:46:23.729078  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | created network xml: 
	I1028 12:46:23.729093  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | <network>
	I1028 12:46:23.729126  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |   <name>mk-old-k8s-version-733464</name>
	I1028 12:46:23.729141  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |   <dns enable='no'/>
	I1028 12:46:23.729148  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |   
	I1028 12:46:23.729153  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 12:46:23.729159  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |     <dhcp>
	I1028 12:46:23.729165  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 12:46:23.729170  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |     </dhcp>
	I1028 12:46:23.729175  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |   </ip>
	I1028 12:46:23.729182  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG |   
	I1028 12:46:23.729188  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | </network>
	I1028 12:46:23.729206  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | 
	I1028 12:46:23.734312  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | trying to create private KVM network mk-old-k8s-version-733464 192.168.39.0/24...
	I1028 12:46:23.804915  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | private KVM network mk-old-k8s-version-733464 192.168.39.0/24 created
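The network the kvm2 driver creates above can be reproduced by hand with virsh; a minimal sketch, assuming the <network> XML printed in the log is saved to a file named mk-old-k8s-version-733464.xml (the file name is only illustrative):

    cat > mk-old-k8s-version-733464.xml <<'EOF'
    <network>
      <name>mk-old-k8s-version-733464</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>
    EOF
    virsh net-define mk-old-k8s-version-733464.xml   # register the network definition
    virsh net-start mk-old-k8s-version-733464        # activate the 192.168.39.0/24 network with DHCP
    virsh net-list --all                             # confirm it shows as active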
	I1028 12:46:23.804954  125765 main.go:141] libmachine: (old-k8s-version-733464) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464 ...
	I1028 12:46:23.804971  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:23.804902  126105 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:46:23.804990  125765 main.go:141] libmachine: (old-k8s-version-733464) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 12:46:23.805062  125765 main.go:141] libmachine: (old-k8s-version-733464) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 12:46:24.075985  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:24.075836  126105 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa...
	I1028 12:46:24.320205  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:24.320084  126105 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/old-k8s-version-733464.rawdisk...
	I1028 12:46:24.320236  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Writing magic tar header
	I1028 12:46:24.320249  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Writing SSH key tar header
	I1028 12:46:24.320257  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:24.320212  126105 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464 ...
	I1028 12:46:24.320347  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464
	I1028 12:46:24.320374  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 12:46:24.320390  125765 main.go:141] libmachine: (old-k8s-version-733464) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464 (perms=drwx------)
	I1028 12:46:24.320411  125765 main.go:141] libmachine: (old-k8s-version-733464) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 12:46:24.320421  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:46:24.320432  125765 main.go:141] libmachine: (old-k8s-version-733464) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 12:46:24.320482  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 12:46:24.320506  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 12:46:24.320518  125765 main.go:141] libmachine: (old-k8s-version-733464) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 12:46:24.320535  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Checking permissions on dir: /home/jenkins
	I1028 12:46:24.320549  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Checking permissions on dir: /home
	I1028 12:46:24.320560  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Skipping /home - not owner
	I1028 12:46:24.320577  125765 main.go:141] libmachine: (old-k8s-version-733464) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 12:46:24.320589  125765 main.go:141] libmachine: (old-k8s-version-733464) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 12:46:24.320599  125765 main.go:141] libmachine: (old-k8s-version-733464) Creating domain...
	I1028 12:46:24.321703  125765 main.go:141] libmachine: (old-k8s-version-733464) define libvirt domain using xml: 
	I1028 12:46:24.321731  125765 main.go:141] libmachine: (old-k8s-version-733464) <domain type='kvm'>
	I1028 12:46:24.321742  125765 main.go:141] libmachine: (old-k8s-version-733464)   <name>old-k8s-version-733464</name>
	I1028 12:46:24.321750  125765 main.go:141] libmachine: (old-k8s-version-733464)   <memory unit='MiB'>2200</memory>
	I1028 12:46:24.321759  125765 main.go:141] libmachine: (old-k8s-version-733464)   <vcpu>2</vcpu>
	I1028 12:46:24.321769  125765 main.go:141] libmachine: (old-k8s-version-733464)   <features>
	I1028 12:46:24.321775  125765 main.go:141] libmachine: (old-k8s-version-733464)     <acpi/>
	I1028 12:46:24.321779  125765 main.go:141] libmachine: (old-k8s-version-733464)     <apic/>
	I1028 12:46:24.321786  125765 main.go:141] libmachine: (old-k8s-version-733464)     <pae/>
	I1028 12:46:24.321791  125765 main.go:141] libmachine: (old-k8s-version-733464)     
	I1028 12:46:24.321800  125765 main.go:141] libmachine: (old-k8s-version-733464)   </features>
	I1028 12:46:24.321818  125765 main.go:141] libmachine: (old-k8s-version-733464)   <cpu mode='host-passthrough'>
	I1028 12:46:24.321825  125765 main.go:141] libmachine: (old-k8s-version-733464)   
	I1028 12:46:24.321830  125765 main.go:141] libmachine: (old-k8s-version-733464)   </cpu>
	I1028 12:46:24.321834  125765 main.go:141] libmachine: (old-k8s-version-733464)   <os>
	I1028 12:46:24.321841  125765 main.go:141] libmachine: (old-k8s-version-733464)     <type>hvm</type>
	I1028 12:46:24.321869  125765 main.go:141] libmachine: (old-k8s-version-733464)     <boot dev='cdrom'/>
	I1028 12:46:24.321893  125765 main.go:141] libmachine: (old-k8s-version-733464)     <boot dev='hd'/>
	I1028 12:46:24.321903  125765 main.go:141] libmachine: (old-k8s-version-733464)     <bootmenu enable='no'/>
	I1028 12:46:24.321910  125765 main.go:141] libmachine: (old-k8s-version-733464)   </os>
	I1028 12:46:24.321919  125765 main.go:141] libmachine: (old-k8s-version-733464)   <devices>
	I1028 12:46:24.321929  125765 main.go:141] libmachine: (old-k8s-version-733464)     <disk type='file' device='cdrom'>
	I1028 12:46:24.321944  125765 main.go:141] libmachine: (old-k8s-version-733464)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/boot2docker.iso'/>
	I1028 12:46:24.321953  125765 main.go:141] libmachine: (old-k8s-version-733464)       <target dev='hdc' bus='scsi'/>
	I1028 12:46:24.321962  125765 main.go:141] libmachine: (old-k8s-version-733464)       <readonly/>
	I1028 12:46:24.321977  125765 main.go:141] libmachine: (old-k8s-version-733464)     </disk>
	I1028 12:46:24.321991  125765 main.go:141] libmachine: (old-k8s-version-733464)     <disk type='file' device='disk'>
	I1028 12:46:24.322006  125765 main.go:141] libmachine: (old-k8s-version-733464)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 12:46:24.322023  125765 main.go:141] libmachine: (old-k8s-version-733464)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/old-k8s-version-733464.rawdisk'/>
	I1028 12:46:24.322034  125765 main.go:141] libmachine: (old-k8s-version-733464)       <target dev='hda' bus='virtio'/>
	I1028 12:46:24.322046  125765 main.go:141] libmachine: (old-k8s-version-733464)     </disk>
	I1028 12:46:24.322062  125765 main.go:141] libmachine: (old-k8s-version-733464)     <interface type='network'>
	I1028 12:46:24.322114  125765 main.go:141] libmachine: (old-k8s-version-733464)       <source network='mk-old-k8s-version-733464'/>
	I1028 12:46:24.322130  125765 main.go:141] libmachine: (old-k8s-version-733464)       <model type='virtio'/>
	I1028 12:46:24.322160  125765 main.go:141] libmachine: (old-k8s-version-733464)     </interface>
	I1028 12:46:24.322183  125765 main.go:141] libmachine: (old-k8s-version-733464)     <interface type='network'>
	I1028 12:46:24.322197  125765 main.go:141] libmachine: (old-k8s-version-733464)       <source network='default'/>
	I1028 12:46:24.322212  125765 main.go:141] libmachine: (old-k8s-version-733464)       <model type='virtio'/>
	I1028 12:46:24.322225  125765 main.go:141] libmachine: (old-k8s-version-733464)     </interface>
	I1028 12:46:24.322235  125765 main.go:141] libmachine: (old-k8s-version-733464)     <serial type='pty'>
	I1028 12:46:24.322260  125765 main.go:141] libmachine: (old-k8s-version-733464)       <target port='0'/>
	I1028 12:46:24.322270  125765 main.go:141] libmachine: (old-k8s-version-733464)     </serial>
	I1028 12:46:24.322281  125765 main.go:141] libmachine: (old-k8s-version-733464)     <console type='pty'>
	I1028 12:46:24.322297  125765 main.go:141] libmachine: (old-k8s-version-733464)       <target type='serial' port='0'/>
	I1028 12:46:24.322309  125765 main.go:141] libmachine: (old-k8s-version-733464)     </console>
	I1028 12:46:24.322319  125765 main.go:141] libmachine: (old-k8s-version-733464)     <rng model='virtio'>
	I1028 12:46:24.322334  125765 main.go:141] libmachine: (old-k8s-version-733464)       <backend model='random'>/dev/random</backend>
	I1028 12:46:24.322342  125765 main.go:141] libmachine: (old-k8s-version-733464)     </rng>
	I1028 12:46:24.322353  125765 main.go:141] libmachine: (old-k8s-version-733464)     
	I1028 12:46:24.322368  125765 main.go:141] libmachine: (old-k8s-version-733464)     
	I1028 12:46:24.322379  125765 main.go:141] libmachine: (old-k8s-version-733464)   </devices>
	I1028 12:46:24.322389  125765 main.go:141] libmachine: (old-k8s-version-733464) </domain>
	I1028 12:46:24.322401  125765 main.go:141] libmachine: (old-k8s-version-733464) 
	I1028 12:46:24.328707  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:52:b5:f4 in network default
	I1028 12:46:24.329286  125765 main.go:141] libmachine: (old-k8s-version-733464) Ensuring networks are active...
	I1028 12:46:24.329312  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:24.330067  125765 main.go:141] libmachine: (old-k8s-version-733464) Ensuring network default is active
	I1028 12:46:24.330380  125765 main.go:141] libmachine: (old-k8s-version-733464) Ensuring network mk-old-k8s-version-733464 is active
	I1028 12:46:24.330880  125765 main.go:141] libmachine: (old-k8s-version-733464) Getting domain xml...
	I1028 12:46:24.331494  125765 main.go:141] libmachine: (old-k8s-version-733464) Creating domain...
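The "define libvirt domain using xml" and "Creating domain..." steps above map onto two virsh calls; a sketch of doing the same by hand, assuming the generated <domain> XML has been written to domain.xml (an illustrative file name, not one the driver uses):

    virsh define domain.xml                  # register the VM from the XML shown above
    virsh start old-k8s-version-733464       # boot the domain
    virsh dominfo old-k8s-version-733464     # state should report "running"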
	I1028 12:46:25.542563  125765 main.go:141] libmachine: (old-k8s-version-733464) Waiting to get IP...
	I1028 12:46:25.543334  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:25.543812  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:25.543838  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:25.543790  126105 retry.go:31] will retry after 221.731725ms: waiting for machine to come up
	I1028 12:46:25.767276  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:25.767850  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:25.767889  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:25.767798  126105 retry.go:31] will retry after 290.369811ms: waiting for machine to come up
	I1028 12:46:26.060460  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:26.060846  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:26.060876  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:26.060819  126105 retry.go:31] will retry after 426.769136ms: waiting for machine to come up
	I1028 12:46:26.489473  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:26.489990  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:26.490013  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:26.489936  126105 retry.go:31] will retry after 546.410859ms: waiting for machine to come up
	I1028 12:46:27.038154  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:27.038855  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:27.038884  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:27.038801  126105 retry.go:31] will retry after 498.582139ms: waiting for machine to come up
	I1028 12:46:27.539758  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:27.540161  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:27.540183  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:27.540129  126105 retry.go:31] will retry after 635.048198ms: waiting for machine to come up
	I1028 12:46:28.176882  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:28.177393  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:28.177446  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:28.177335  126105 retry.go:31] will retry after 1.040784224s: waiting for machine to come up
	I1028 12:46:29.219535  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:29.220050  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:29.220084  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:29.219967  126105 retry.go:31] will retry after 913.738827ms: waiting for machine to come up
	I1028 12:46:30.135089  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:30.135611  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:30.135658  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:30.135547  126105 retry.go:31] will retry after 1.363607843s: waiting for machine to come up
	I1028 12:46:31.500912  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:31.501572  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:31.501595  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:31.501516  126105 retry.go:31] will retry after 2.128982045s: waiting for machine to come up
	I1028 12:46:33.632955  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:33.633587  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:33.633623  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:33.633518  126105 retry.go:31] will retry after 2.536794947s: waiting for machine to come up
	I1028 12:46:36.172648  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:36.173178  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:36.173204  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:36.173103  126105 retry.go:31] will retry after 3.511133899s: waiting for machine to come up
	I1028 12:46:39.685658  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:39.686231  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:46:39.686250  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:46:39.686189  126105 retry.go:31] will retry after 4.096916533s: waiting for machine to come up
	I1028 12:46:43.784755  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:43.785285  125765 main.go:141] libmachine: (old-k8s-version-733464) Found IP for machine: 192.168.39.208
	I1028 12:46:43.785327  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has current primary IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
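The retry loop above keeps polling libvirt until the guest's MAC address picks up a DHCP lease on the private network; a rough manual equivalent, using the MAC and network name taken from the log:

    # poll the network's DHCP leases until the new guest's MAC appears
    while ! virsh net-dhcp-leases mk-old-k8s-version-733464 | grep -q '52:54:00:cf:6c:2d'; do
      sleep 2    # the driver backs off with growing intervals; a fixed sleep suffices by hand
    done
    virsh net-dhcp-leases mk-old-k8s-version-733464   # the lease shows 192.168.39.208 once assigned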
	I1028 12:46:43.785338  125765 main.go:141] libmachine: (old-k8s-version-733464) Reserving static IP address...
	I1028 12:46:43.785655  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-733464", mac: "52:54:00:cf:6c:2d", ip: "192.168.39.208"} in network mk-old-k8s-version-733464
	I1028 12:46:43.865956  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Getting to WaitForSSH function...
	I1028 12:46:43.865989  125765 main.go:141] libmachine: (old-k8s-version-733464) Reserved static IP address: 192.168.39.208
	I1028 12:46:43.866002  125765 main.go:141] libmachine: (old-k8s-version-733464) Waiting for SSH to be available...
	I1028 12:46:43.868945  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:43.869334  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:43.869371  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:43.869518  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Using SSH client type: external
	I1028 12:46:43.869546  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa (-rw-------)
	I1028 12:46:43.869587  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:46:43.869605  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | About to run SSH command:
	I1028 12:46:43.869622  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | exit 0
	I1028 12:46:43.999540  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | SSH cmd err, output: <nil>: 
	I1028 12:46:43.999862  125765 main.go:141] libmachine: (old-k8s-version-733464) KVM machine creation complete!
	I1028 12:46:44.000192  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetConfigRaw
	I1028 12:46:44.000790  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:44.000979  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:44.001188  125765 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:46:44.001205  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetState
	I1028 12:46:44.002721  125765 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:46:44.002740  125765 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:46:44.002748  125765 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:46:44.002756  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:44.005246  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.005612  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.005658  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.005786  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:44.005956  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.006144  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.006288  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:44.006453  125765 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:44.006709  125765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:46:44.006727  125765 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:46:44.114779  125765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
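The SSH availability probe above only runs `exit 0` over the connection; an equivalent one-off check, using the key path, user, address and options that appear in the log, would be roughly:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa \
        docker@192.168.39.208 'exit 0' && echo "SSH reachable"   # a zero exit code is all the probe checks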
	I1028 12:46:44.114813  125765 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:46:44.114824  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:44.117866  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.118283  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.118305  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.118499  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:44.118696  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.118872  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.119005  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:44.119144  125765 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:44.119336  125765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:46:44.119347  125765 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:46:44.228878  125765 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:46:44.228948  125765 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:46:44.228959  125765 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:46:44.228972  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetMachineName
	I1028 12:46:44.229214  125765 buildroot.go:166] provisioning hostname "old-k8s-version-733464"
	I1028 12:46:44.229240  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetMachineName
	I1028 12:46:44.229444  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:44.232282  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.232643  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.232681  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.232816  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:44.232991  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.233145  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.233277  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:44.233433  125765 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:44.233603  125765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:46:44.233615  125765 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-733464 && echo "old-k8s-version-733464" | sudo tee /etc/hostname
	I1028 12:46:44.356189  125765 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-733464
	
	I1028 12:46:44.356222  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:44.359384  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.359916  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.359946  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.360187  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:44.360375  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.360500  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.360709  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:44.361006  125765 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:44.361251  125765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:46:44.361277  125765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-733464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-733464/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-733464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:46:44.481779  125765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:46:44.481820  125765 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:46:44.481854  125765 buildroot.go:174] setting up certificates
	I1028 12:46:44.481870  125765 provision.go:84] configureAuth start
	I1028 12:46:44.481887  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetMachineName
	I1028 12:46:44.482196  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:46:44.485537  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.486052  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.486086  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.486281  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:44.488565  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.488875  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.488904  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.489087  125765 provision.go:143] copyHostCerts
	I1028 12:46:44.489169  125765 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:46:44.489185  125765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:46:44.489233  125765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:46:44.489333  125765 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:46:44.489342  125765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:46:44.489363  125765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:46:44.489413  125765 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:46:44.489420  125765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:46:44.489449  125765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:46:44.489493  125765 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-733464 san=[127.0.0.1 192.168.39.208 localhost minikube old-k8s-version-733464]
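minikube generates the server certificate above in Go; purely as an illustration of the SANs involved (not the code path the test exercises), an openssl sketch that signs a server cert with the same CA and subject alternative names would look like:

    openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.old-k8s-version-733464"                      # new key + CSR, no passphrase
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -out server.pem -days 365 \
        -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.208,DNS:localhost,DNS:minikube,DNS:old-k8s-version-733464")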
	I1028 12:46:44.713587  125765 provision.go:177] copyRemoteCerts
	I1028 12:46:44.713645  125765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:46:44.713671  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:44.716433  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.716765  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.716792  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.716965  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:44.717155  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.717299  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:44.717455  125765 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:46:44.800567  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:46:44.822544  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:46:44.843351  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:46:44.865084  125765 provision.go:87] duration metric: took 383.195968ms to configureAuth
	I1028 12:46:44.865114  125765 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:46:44.865300  125765 config.go:182] Loaded profile config "old-k8s-version-733464": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:46:44.865386  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:44.868076  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.868402  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:44.868432  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:44.868624  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:44.868817  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.868980  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:44.869152  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:44.869289  125765 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:44.869477  125765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:46:44.869492  125765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:46:45.085485  125765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
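The command above writes a single sysconfig drop-in and restarts CRI-O; verifying the result on the guest is just a matter of reading the file back and checking the service, e.g.:

    cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # prints "active" once the restart has completed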
	I1028 12:46:45.085524  125765 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:46:45.085538  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetURL
	I1028 12:46:45.086809  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | Using libvirt version 6000000
	I1028 12:46:45.088940  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.089290  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:45.089322  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.089488  125765 main.go:141] libmachine: Docker is up and running!
	I1028 12:46:45.089505  125765 main.go:141] libmachine: Reticulating splines...
	I1028 12:46:45.089513  125765 client.go:171] duration metric: took 21.364366491s to LocalClient.Create
	I1028 12:46:45.089541  125765 start.go:167] duration metric: took 21.364436598s to libmachine.API.Create "old-k8s-version-733464"
	I1028 12:46:45.089556  125765 start.go:293] postStartSetup for "old-k8s-version-733464" (driver="kvm2")
	I1028 12:46:45.089567  125765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:46:45.089586  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:45.089808  125765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:46:45.089833  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:45.091917  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.092203  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:45.092223  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.092387  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:45.092553  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:45.092724  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:45.092824  125765 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:46:45.177235  125765 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:46:45.181422  125765 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:46:45.181445  125765 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:46:45.181500  125765 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:46:45.181570  125765 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:46:45.181658  125765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:46:45.189779  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:46:45.213847  125765 start.go:296] duration metric: took 124.275998ms for postStartSetup
	I1028 12:46:45.213899  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetConfigRaw
	I1028 12:46:45.214642  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:46:45.217639  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.218113  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:45.218155  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.218451  125765 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/config.json ...
	I1028 12:46:45.218688  125765 start.go:128] duration metric: took 21.514860196s to createHost
	I1028 12:46:45.218718  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:45.221275  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.221631  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:45.221657  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.221762  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:45.221961  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:45.222137  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:45.222291  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:45.222438  125765 main.go:141] libmachine: Using SSH client type: native
	I1028 12:46:45.222613  125765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:46:45.222634  125765 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:46:45.329098  125765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730119605.288851317
	
	I1028 12:46:45.329123  125765 fix.go:216] guest clock: 1730119605.288851317
	I1028 12:46:45.329132  125765 fix.go:229] Guest: 2024-10-28 12:46:45.288851317 +0000 UTC Remote: 2024-10-28 12:46:45.218703308 +0000 UTC m=+48.212673423 (delta=70.148009ms)
	I1028 12:46:45.329170  125765 fix.go:200] guest clock delta is within tolerance: 70.148009ms
	I1028 12:46:45.329177  125765 start.go:83] releasing machines lock for "old-k8s-version-733464", held for 21.625551488s
	I1028 12:46:45.329200  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:45.329487  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:46:45.332594  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.332993  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:45.333025  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.333266  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:45.333811  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:45.334025  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:46:45.334122  125765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:46:45.334180  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:45.334313  125765 ssh_runner.go:195] Run: cat /version.json
	I1028 12:46:45.334343  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:46:45.337152  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.337176  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.337533  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:45.337559  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.337585  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:45.337606  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:45.337876  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:45.337880  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:46:45.338058  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:45.338101  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:46:45.338285  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:45.338294  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:46:45.338485  125765 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:46:45.338491  125765 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:46:45.440557  125765 ssh_runner.go:195] Run: systemctl --version
	I1028 12:46:45.447154  125765 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:46:45.604512  125765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:46:45.611967  125765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:46:45.612051  125765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:46:45.631252  125765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:46:45.631279  125765 start.go:495] detecting cgroup driver to use...
	I1028 12:46:45.631346  125765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:46:45.646567  125765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:46:45.660993  125765 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:46:45.661054  125765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:46:45.674392  125765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:46:45.693578  125765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:46:45.822142  125765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:46:45.984973  125765 docker.go:233] disabling docker service ...
	I1028 12:46:45.985046  125765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:46:45.998653  125765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:46:46.013419  125765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:46:46.162564  125765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:46:46.291087  125765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:46:46.304055  125765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:46:46.320952  125765 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:46:46.321030  125765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:46.330186  125765 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:46:46.330260  125765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:46.339419  125765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:46.348464  125765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:46:46.358881  125765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:46:46.369468  125765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:46:46.379256  125765 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:46:46.379316  125765 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:46:46.390884  125765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:46:46.400004  125765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:46:46.527011  125765 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:46:46.618997  125765 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:46:46.619076  125765 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:46:46.623606  125765 start.go:563] Will wait 60s for crictl version
	I1028 12:46:46.623679  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:46.627227  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:46:46.668434  125765 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:46:46.668534  125765 ssh_runner.go:195] Run: crio --version
	I1028 12:46:46.697844  125765 ssh_runner.go:195] Run: crio --version
	I1028 12:46:46.728403  125765 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:46:46.729437  125765 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:46:46.732226  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:46.732590  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:46:38 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:46:46.732630  125765 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:46:46.732805  125765 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:46:46.736472  125765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:46:46.747909  125765 kubeadm.go:883] updating cluster {Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:46:46.748012  125765 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:46:46.748063  125765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:46:46.780701  125765 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:46:46.780781  125765 ssh_runner.go:195] Run: which lz4
	I1028 12:46:46.784273  125765 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:46:46.787976  125765 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:46:46.788004  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:46:48.210311  125765 crio.go:462] duration metric: took 1.426067926s to copy over tarball
	I1028 12:46:48.210408  125765 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:46:50.813160  125765 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.602716339s)
	I1028 12:46:50.813189  125765 crio.go:469] duration metric: took 2.602844176s to extract the tarball
	I1028 12:46:50.813196  125765 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:46:50.853231  125765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:46:50.891925  125765 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:46:50.891951  125765 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:46:50.892026  125765 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:50.892048  125765 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:50.892066  125765 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:50.892085  125765 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:50.892034  125765 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:46:50.892105  125765 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:50.892072  125765 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:46:50.892055  125765 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:46:50.893648  125765 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:46:50.893658  125765 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:50.893666  125765 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:50.893681  125765 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:46:50.893647  125765 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:50.893744  125765 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:46:50.893748  125765 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:50.893751  125765 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:51.050589  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:51.051063  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:51.052046  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:51.055863  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:51.058061  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:51.058402  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:46:51.129519  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:46:51.167925  125765 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:46:51.167993  125765 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:51.168047  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:51.176099  125765 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:46:51.176138  125765 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:51.176190  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:51.186357  125765 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:46:51.186393  125765 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:51.186426  125765 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:46:51.186462  125765 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:51.186503  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:51.186430  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:51.203794  125765 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:46:51.203844  125765 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:51.203900  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:51.213408  125765 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:46:51.213450  125765 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:46:51.213493  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:51.221561  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:51.221583  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:51.221606  125765 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:46:51.221646  125765 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:46:51.221664  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:51.221668  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:51.221685  125765 ssh_runner.go:195] Run: which crictl
	I1028 12:46:51.221744  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:46:51.221741  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:51.340586  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:51.340638  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:51.343622  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:51.343715  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:46:51.348631  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:46:51.348692  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:51.348747  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:51.482407  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:46:51.482416  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:46:51.482481  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:46:51.487163  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:46:51.487208  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:46:51.487247  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:46:51.487263  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:46:51.626336  125765 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:46:51.626391  125765 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:46:51.626426  125765 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:46:51.632628  125765 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:46:51.632702  125765 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:46:51.632761  125765 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:46:51.632846  125765 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:46:51.663195  125765 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:46:51.851281  125765 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:46:51.997873  125765 cache_images.go:92] duration metric: took 1.105874277s to LoadCachedImages
	W1028 12:46:51.997995  125765 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1028 12:46:51.998014  125765 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.20.0 crio true true} ...
	I1028 12:46:51.998153  125765 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-733464 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:46:51.998250  125765 ssh_runner.go:195] Run: crio config
	I1028 12:46:52.045442  125765 cni.go:84] Creating CNI manager for ""
	I1028 12:46:52.045471  125765 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:46:52.045488  125765 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:46:52.045522  125765 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-733464 NodeName:old-k8s-version-733464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:46:52.045725  125765 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-733464"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:46:52.045812  125765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:46:52.055885  125765 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:46:52.055959  125765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:46:52.064875  125765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:46:52.081308  125765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:46:52.098902  125765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:46:52.116383  125765 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I1028 12:46:52.120199  125765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:46:52.131737  125765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:46:52.248319  125765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:46:52.265518  125765 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464 for IP: 192.168.39.208
	I1028 12:46:52.265551  125765 certs.go:194] generating shared ca certs ...
	I1028 12:46:52.265573  125765 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:52.265769  125765 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:46:52.265835  125765 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:46:52.265848  125765 certs.go:256] generating profile certs ...
	I1028 12:46:52.265924  125765 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.key
	I1028 12:46:52.265944  125765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.crt with IP's: []
	I1028 12:46:52.368882  125765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.crt ...
	I1028 12:46:52.368921  125765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.crt: {Name:mk96a05aa921aaff480d8cf930ee57f8c460f994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:52.369127  125765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.key ...
	I1028 12:46:52.369148  125765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.key: {Name:mk02283fe6d6ea9ca1729d76c4da3d459adc895d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:52.369257  125765 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key.56bd5639
	I1028 12:46:52.369282  125765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.crt.56bd5639 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208]
	I1028 12:46:52.532405  125765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.crt.56bd5639 ...
	I1028 12:46:52.532435  125765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.crt.56bd5639: {Name:mkbafec03d96d5800c520ce8ac339a196c2b2749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:52.532605  125765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key.56bd5639 ...
	I1028 12:46:52.532620  125765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key.56bd5639: {Name:mkd36a10ef681e9e3650e8a7196344cdaf93d775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:52.532694  125765 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.crt.56bd5639 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.crt
	I1028 12:46:52.532786  125765 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key.56bd5639 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key
	I1028 12:46:52.532842  125765 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.key
	I1028 12:46:52.532867  125765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.crt with IP's: []
	I1028 12:46:52.617232  125765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.crt ...
	I1028 12:46:52.617269  125765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.crt: {Name:mk6378917fac54e21c32be4776c25e5a8fdbbe70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:52.617455  125765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.key ...
	I1028 12:46:52.617474  125765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.key: {Name:mke00a75cef37fe81f270c9351881bd347aa238c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:46:52.617682  125765 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:46:52.617723  125765 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:46:52.617733  125765 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:46:52.617753  125765 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:46:52.617775  125765 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:46:52.617795  125765 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:46:52.617842  125765 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:46:52.618466  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:46:52.644202  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:46:52.666687  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:46:52.689925  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:46:52.712285  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:46:52.734602  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:46:52.756309  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:46:52.779044  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:46:52.801153  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:46:52.822997  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:46:52.844650  125765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:46:52.866195  125765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:46:52.880899  125765 ssh_runner.go:195] Run: openssl version
	I1028 12:46:52.886135  125765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:46:52.896968  125765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:46:52.901904  125765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:46:52.901958  125765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:46:52.907483  125765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:46:52.917218  125765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:46:52.926532  125765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:46:52.930466  125765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:46:52.930513  125765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:46:52.936017  125765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:46:52.945437  125765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:46:52.955477  125765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:46:52.959602  125765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:46:52.959662  125765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:46:52.964808  125765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:46:52.974636  125765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:46:52.978326  125765 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:46:52.978394  125765 kubeadm.go:392] StartCluster: {Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:46:52.978506  125765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:46:52.978556  125765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:46:53.013837  125765 cri.go:89] found id: ""
	I1028 12:46:53.013928  125765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:46:53.023019  125765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:46:53.034150  125765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:46:53.042754  125765 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:46:53.042773  125765 kubeadm.go:157] found existing configuration files:
	
	I1028 12:46:53.042813  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:46:53.050839  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:46:53.050901  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:46:53.059261  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:46:53.070767  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:46:53.070842  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:46:53.079697  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:46:53.088085  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:46:53.088151  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:46:53.096854  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:46:53.105436  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:46:53.105496  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:46:53.116267  125765 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:46:53.239065  125765 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:46:53.239208  125765 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:46:53.372632  125765 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:46:53.372828  125765 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:46:53.372971  125765 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:46:53.548172  125765 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:46:53.603067  125765 out.go:235]   - Generating certificates and keys ...
	I1028 12:46:53.603202  125765 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:46:53.603289  125765 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:46:53.639890  125765 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:46:53.794827  125765 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:46:54.191331  125765 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:46:54.264657  125765 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:46:54.318677  125765 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:46:54.318917  125765 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-733464] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 12:46:54.417096  125765 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:46:54.417427  125765 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-733464] and IPs [192.168.39.208 127.0.0.1 ::1]
	I1028 12:46:54.553946  125765 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:46:54.767542  125765 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:46:54.895513  125765 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:46:54.895729  125765 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:46:55.125341  125765 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:46:55.246268  125765 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:46:55.482830  125765 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:46:55.843501  125765 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:46:55.863431  125765 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:46:55.863595  125765 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:46:55.863696  125765 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:46:55.995145  125765 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:46:55.996914  125765 out.go:235]   - Booting up control plane ...
	I1028 12:46:55.997010  125765 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:46:56.007661  125765 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:46:56.008655  125765 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:46:56.009431  125765 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:46:56.013265  125765 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:47:35.983964  125765 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:47:35.984263  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:47:35.984550  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:47:40.983421  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:47:40.983742  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:47:50.982221  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:47:50.982524  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:48:10.983058  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:48:10.983348  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:48:50.981625  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:48:50.981865  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:48:50.981881  125765 kubeadm.go:310] 
	I1028 12:48:50.981936  125765 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:48:50.981973  125765 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:48:50.981985  125765 kubeadm.go:310] 
	I1028 12:48:50.982043  125765 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:48:50.982082  125765 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:48:50.982270  125765 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:48:50.982300  125765 kubeadm.go:310] 
	I1028 12:48:50.982408  125765 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:48:50.982441  125765 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:48:50.982476  125765 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:48:50.982482  125765 kubeadm.go:310] 
	I1028 12:48:50.982573  125765 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:48:50.982653  125765 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:48:50.982661  125765 kubeadm.go:310] 
	I1028 12:48:50.982812  125765 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:48:50.982948  125765 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:48:50.983038  125765 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:48:50.983149  125765 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:48:50.983184  125765 kubeadm.go:310] 
	I1028 12:48:50.983315  125765 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:48:50.983387  125765 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:48:50.983491  125765 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
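	The failure above is ultimately a kubelet problem: the health endpoint on 127.0.0.1:10248 never answered, so the control-plane static pods were never started. The advice embedded in the kubeadm message boils down to a handful of commands; a minimal sketch of running them by hand on the node (assuming a systemd host with CRI-O, as in this job) is:

	    # Check whether the kubelet service is running and why it may have exited.
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    # List any control-plane containers CRI-O managed to start, including exited ones.
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # For a failing container ID printed by the previous command:
	    #   sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID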
	W1028 12:48:50.983649  125765 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-733464] and IPs [192.168.39.208 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-733464] and IPs [192.168.39.208 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 12:48:50.983711  125765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:48:52.028843  125765 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.045103015s)
	I1028 12:48:52.028938  125765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:48:52.042618  125765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:48:52.052180  125765 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:48:52.052227  125765 kubeadm.go:157] found existing configuration files:
	
	I1028 12:48:52.052284  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:48:52.060833  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:48:52.060888  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:48:52.069679  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:48:52.078995  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:48:52.079050  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:48:52.088360  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:48:52.096962  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:48:52.097026  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:48:52.105279  125765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:48:52.113272  125765 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:48:52.113327  125765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
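	Before retrying kubeadm init, the runner repeats the stale-config check seen earlier: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not reference it (or, as here, does not exist) is removed so kubeadm can rewrite it. A minimal shell sketch of the same cleanup, using the endpoint and file names shown in the log:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        path="/etc/kubernetes/$f"
	        # Remove the file if it does not reference the expected endpoint
	        # (grep also returns non-zero when the file is absent, matching the log above).
	        if ! sudo grep -q "$endpoint" "$path"; then
	            sudo rm -f "$path"
	        fi
	    done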
	I1028 12:48:52.121844  125765 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:48:52.190360  125765 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:48:52.190483  125765 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:48:52.333454  125765 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:48:52.333612  125765 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:48:52.333734  125765 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:48:52.526590  125765 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:48:52.529801  125765 out.go:235]   - Generating certificates and keys ...
	I1028 12:48:52.529919  125765 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:48:52.530024  125765 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:48:52.530131  125765 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:48:52.530231  125765 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:48:52.530338  125765 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:48:52.530413  125765 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:48:52.530468  125765 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:48:52.530603  125765 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:48:52.530729  125765 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:48:52.530847  125765 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:48:52.530909  125765 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:48:52.531004  125765 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:48:52.697604  125765 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:48:52.900908  125765 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:48:53.398442  125765 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:48:53.496571  125765 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:48:53.514689  125765 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:48:53.515813  125765 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:48:53.515885  125765 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:48:53.658583  125765 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:48:53.660097  125765 out.go:235]   - Booting up control plane ...
	I1028 12:48:53.660233  125765 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:48:53.672305  125765 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:48:53.672641  125765 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:48:53.673608  125765 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:48:53.676694  125765 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:49:33.678758  125765 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:49:33.679109  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:49:33.679300  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:49:38.679652  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:49:38.679878  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:49:48.680601  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:49:48.680844  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:50:08.682586  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:50:08.682829  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:50:48.681655  125765 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:50:48.682241  125765 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:50:48.682282  125765 kubeadm.go:310] 
	I1028 12:50:48.682388  125765 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:50:48.682474  125765 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:50:48.682510  125765 kubeadm.go:310] 
	I1028 12:50:48.682565  125765 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:50:48.682618  125765 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:50:48.682791  125765 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:50:48.682813  125765 kubeadm.go:310] 
	I1028 12:50:48.682972  125765 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:50:48.683016  125765 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:50:48.683064  125765 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:50:48.683074  125765 kubeadm.go:310] 
	I1028 12:50:48.683250  125765 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:50:48.683367  125765 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:50:48.683399  125765 kubeadm.go:310] 
	I1028 12:50:48.683521  125765 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:50:48.683647  125765 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:50:48.683770  125765 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:50:48.683866  125765 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:50:48.683901  125765 kubeadm.go:310] 
	I1028 12:50:48.684031  125765 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:50:48.684171  125765 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:50:48.684273  125765 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:50:48.684360  125765 kubeadm.go:394] duration metric: took 3m55.705970527s to StartCluster
	I1028 12:50:48.684410  125765 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:50:48.684478  125765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:50:48.733269  125765 cri.go:89] found id: ""
	I1028 12:50:48.733300  125765 logs.go:282] 0 containers: []
	W1028 12:50:48.733311  125765 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:50:48.733320  125765 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:50:48.733393  125765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:50:48.768975  125765 cri.go:89] found id: ""
	I1028 12:50:48.769010  125765 logs.go:282] 0 containers: []
	W1028 12:50:48.769021  125765 logs.go:284] No container was found matching "etcd"
	I1028 12:50:48.769030  125765 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:50:48.769096  125765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:50:48.803342  125765 cri.go:89] found id: ""
	I1028 12:50:48.803376  125765 logs.go:282] 0 containers: []
	W1028 12:50:48.803385  125765 logs.go:284] No container was found matching "coredns"
	I1028 12:50:48.803395  125765 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:50:48.803468  125765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:50:48.839795  125765 cri.go:89] found id: ""
	I1028 12:50:48.839830  125765 logs.go:282] 0 containers: []
	W1028 12:50:48.839841  125765 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:50:48.839856  125765 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:50:48.839910  125765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:50:48.877063  125765 cri.go:89] found id: ""
	I1028 12:50:48.877095  125765 logs.go:282] 0 containers: []
	W1028 12:50:48.877107  125765 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:50:48.877117  125765 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:50:48.877171  125765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:50:48.907834  125765 cri.go:89] found id: ""
	I1028 12:50:48.907861  125765 logs.go:282] 0 containers: []
	W1028 12:50:48.907870  125765 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:50:48.907879  125765 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:50:48.907941  125765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:50:48.943719  125765 cri.go:89] found id: ""
	I1028 12:50:48.943745  125765 logs.go:282] 0 containers: []
	W1028 12:50:48.943752  125765 logs.go:284] No container was found matching "kindnet"
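	With the retry also timed out, the runner checks whether any control-plane containers exist at all; every crictl query above comes back empty, confirming nothing was ever started. The same per-component check can be reproduced on the node with a short sketch built from the commands in the log:

	    # List containers (running or exited) for each expected component; empty output
	    # corresponds to the "No container was found matching ..." lines above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        echo "$name: ${ids:-<none>}"
	    done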
	I1028 12:50:48.943764  125765 logs.go:123] Gathering logs for dmesg ...
	I1028 12:50:48.943781  125765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:50:48.956140  125765 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:50:48.956173  125765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:50:49.063044  125765 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:50:49.063070  125765 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:50:49.063087  125765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:50:49.165335  125765 logs.go:123] Gathering logs for container status ...
	I1028 12:50:49.165371  125765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:50:49.221968  125765 logs.go:123] Gathering logs for kubelet ...
	I1028 12:50:49.222007  125765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
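	The diagnostics gathered above (dmesg, describe nodes, the CRI-O journal, container status, and the kubelet journal) can be re-run by hand on the node with roughly the commands the runner executes:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # describe nodes will fail with "connection refused" while the apiserver is down, as in the log.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	    sudo journalctl -u kubelet -n 400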
	W1028 12:50:49.284152  125765 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:50:49.284215  125765 out.go:270] * 
	W1028 12:50:49.284278  125765 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:50:49.284292  125765 out.go:270] * 
	W1028 12:50:49.285260  125765 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:50:49.287976  125765 out.go:201] 
	W1028 12:50:49.289156  125765 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:50:49.289203  125765 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:50:49.289231  125765 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:50:49.290758  125765 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-733464 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 6 (223.45943ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:50:49.554430  128520 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-733464" does not appear in /home/jenkins/minikube-integration/19875-77800/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-733464" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (292.57s)
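
The suggestion block in the output above points at a kubelet cgroup-driver mismatch. A minimal retry sketch, reusing the core flags from the failed invocation plus the hinted --extra-config (profile name and driver are taken from this run; whether the flag actually resolves the timeout is not verified here):

    # delete the half-initialized profile, then retry with the suggested cgroup driver
    out/minikube-linux-amd64 delete -p old-k8s-version-733464
    out/minikube-linux-amd64 start -p old-k8s-version-733464 --memory=2200 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd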

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-818470 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-818470 --alsologtostderr -v=3: exit status 82 (2m0.518582758s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-818470"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:49:13.333584  127856 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:49:13.333815  127856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:49:13.333823  127856 out.go:358] Setting ErrFile to fd 2...
	I1028 12:49:13.333827  127856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:49:13.333992  127856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:49:13.334197  127856 out.go:352] Setting JSON to false
	I1028 12:49:13.334265  127856 mustload.go:65] Loading cluster: embed-certs-818470
	I1028 12:49:13.334612  127856 config.go:182] Loaded profile config "embed-certs-818470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:49:13.334686  127856 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/embed-certs-818470/config.json ...
	I1028 12:49:13.334852  127856 mustload.go:65] Loading cluster: embed-certs-818470
	I1028 12:49:13.334950  127856 config.go:182] Loaded profile config "embed-certs-818470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:49:13.334982  127856 stop.go:39] StopHost: embed-certs-818470
	I1028 12:49:13.335329  127856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:49:13.335377  127856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:49:13.350929  127856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37221
	I1028 12:49:13.351502  127856 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:49:13.352117  127856 main.go:141] libmachine: Using API Version  1
	I1028 12:49:13.352152  127856 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:49:13.352543  127856 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:49:13.354895  127856 out.go:177] * Stopping node "embed-certs-818470"  ...
	I1028 12:49:13.356344  127856 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:49:13.356372  127856 main.go:141] libmachine: (embed-certs-818470) Calling .DriverName
	I1028 12:49:13.356604  127856 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:49:13.356640  127856 main.go:141] libmachine: (embed-certs-818470) Calling .GetSSHHostname
	I1028 12:49:13.359604  127856 main.go:141] libmachine: (embed-certs-818470) DBG | domain embed-certs-818470 has defined MAC address 52:54:00:0e:d5:d3 in network mk-embed-certs-818470
	I1028 12:49:13.360064  127856 main.go:141] libmachine: (embed-certs-818470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d5:d3", ip: ""} in network mk-embed-certs-818470: {Iface:virbr2 ExpiryTime:2024-10-28 13:48:21 +0000 UTC Type:0 Mac:52:54:00:0e:d5:d3 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-818470 Clientid:01:52:54:00:0e:d5:d3}
	I1028 12:49:13.360094  127856 main.go:141] libmachine: (embed-certs-818470) DBG | domain embed-certs-818470 has defined IP address 192.168.50.164 and MAC address 52:54:00:0e:d5:d3 in network mk-embed-certs-818470
	I1028 12:49:13.360258  127856 main.go:141] libmachine: (embed-certs-818470) Calling .GetSSHPort
	I1028 12:49:13.360424  127856 main.go:141] libmachine: (embed-certs-818470) Calling .GetSSHKeyPath
	I1028 12:49:13.360575  127856 main.go:141] libmachine: (embed-certs-818470) Calling .GetSSHUsername
	I1028 12:49:13.360736  127856 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/embed-certs-818470/id_rsa Username:docker}
	I1028 12:49:13.462107  127856 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:49:13.540671  127856 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:49:13.606176  127856 main.go:141] libmachine: Stopping "embed-certs-818470"...
	I1028 12:49:13.606205  127856 main.go:141] libmachine: (embed-certs-818470) Calling .GetState
	I1028 12:49:13.608564  127856 main.go:141] libmachine: (embed-certs-818470) Calling .Stop
	I1028 12:49:13.612326  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 0/120
	I1028 12:49:14.614210  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 1/120
	I1028 12:49:15.615498  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 2/120
	I1028 12:49:16.616837  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 3/120
	I1028 12:49:17.618048  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 4/120
	I1028 12:49:18.620053  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 5/120
	I1028 12:49:19.622176  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 6/120
	I1028 12:49:20.623402  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 7/120
	I1028 12:49:21.625610  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 8/120
	I1028 12:49:22.626763  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 9/120
	I1028 12:49:23.628954  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 10/120
	I1028 12:49:24.630381  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 11/120
	I1028 12:49:25.631677  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 12/120
	I1028 12:49:26.632997  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 13/120
	I1028 12:49:27.634175  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 14/120
	I1028 12:49:28.636311  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 15/120
	I1028 12:49:29.637640  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 16/120
	I1028 12:49:30.638874  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 17/120
	I1028 12:49:31.640238  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 18/120
	I1028 12:49:32.642180  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 19/120
	I1028 12:49:33.644209  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 20/120
	I1028 12:49:34.645714  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 21/120
	I1028 12:49:35.647078  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 22/120
	I1028 12:49:36.648467  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 23/120
	I1028 12:49:37.649718  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 24/120
	I1028 12:49:38.651653  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 25/120
	I1028 12:49:39.652953  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 26/120
	I1028 12:49:40.654413  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 27/120
	I1028 12:49:41.655761  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 28/120
	I1028 12:49:42.656959  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 29/120
	I1028 12:49:43.658948  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 30/120
	I1028 12:49:44.660244  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 31/120
	I1028 12:49:45.661663  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 32/120
	I1028 12:49:46.663003  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 33/120
	I1028 12:49:47.664441  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 34/120
	I1028 12:49:48.666364  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 35/120
	I1028 12:49:49.667732  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 36/120
	I1028 12:49:50.668977  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 37/120
	I1028 12:49:51.670677  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 38/120
	I1028 12:49:52.672089  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 39/120
	I1028 12:49:53.673519  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 40/120
	I1028 12:49:54.674847  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 41/120
	I1028 12:49:55.676384  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 42/120
	I1028 12:49:56.677850  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 43/120
	I1028 12:49:57.679336  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 44/120
	I1028 12:49:58.681324  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 45/120
	I1028 12:49:59.682647  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 46/120
	I1028 12:50:00.684237  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 47/120
	I1028 12:50:01.685569  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 48/120
	I1028 12:50:02.687043  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 49/120
	I1028 12:50:03.689068  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 50/120
	I1028 12:50:04.690504  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 51/120
	I1028 12:50:05.691923  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 52/120
	I1028 12:50:06.694138  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 53/120
	I1028 12:50:07.695573  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 54/120
	I1028 12:50:08.697582  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 55/120
	I1028 12:50:09.698888  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 56/120
	I1028 12:50:10.700281  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 57/120
	I1028 12:50:11.701694  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 58/120
	I1028 12:50:12.702985  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 59/120
	I1028 12:50:13.704931  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 60/120
	I1028 12:50:14.707091  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 61/120
	I1028 12:50:15.708571  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 62/120
	I1028 12:50:16.710320  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 63/120
	I1028 12:50:17.711648  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 64/120
	I1028 12:50:18.713060  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 65/120
	I1028 12:50:19.714439  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 66/120
	I1028 12:50:20.715850  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 67/120
	I1028 12:50:21.718149  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 68/120
	I1028 12:50:22.720060  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 69/120
	I1028 12:50:23.721924  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 70/120
	I1028 12:50:24.723500  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 71/120
	I1028 12:50:25.725124  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 72/120
	I1028 12:50:26.726430  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 73/120
	I1028 12:50:27.727853  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 74/120
	I1028 12:50:28.729865  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 75/120
	I1028 12:50:29.731102  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 76/120
	I1028 12:50:30.732337  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 77/120
	I1028 12:50:31.733616  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 78/120
	I1028 12:50:32.734995  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 79/120
	I1028 12:50:33.737246  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 80/120
	I1028 12:50:34.738568  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 81/120
	I1028 12:50:35.740002  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 82/120
	I1028 12:50:36.742325  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 83/120
	I1028 12:50:37.743957  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 84/120
	I1028 12:50:38.745971  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 85/120
	I1028 12:50:39.747081  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 86/120
	I1028 12:50:40.748527  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 87/120
	I1028 12:50:41.749719  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 88/120
	I1028 12:50:42.751040  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 89/120
	I1028 12:50:43.752960  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 90/120
	I1028 12:50:44.755246  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 91/120
	I1028 12:50:45.756883  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 92/120
	I1028 12:50:46.758364  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 93/120
	I1028 12:50:47.759817  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 94/120
	I1028 12:50:48.762261  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 95/120
	I1028 12:50:49.763541  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 96/120
	I1028 12:50:50.764896  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 97/120
	I1028 12:50:51.766233  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 98/120
	I1028 12:50:52.767616  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 99/120
	I1028 12:50:53.769721  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 100/120
	I1028 12:50:54.771071  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 101/120
	I1028 12:50:55.772482  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 102/120
	I1028 12:50:56.774118  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 103/120
	I1028 12:50:57.775675  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 104/120
	I1028 12:50:58.777741  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 105/120
	I1028 12:50:59.778959  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 106/120
	I1028 12:51:00.780368  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 107/120
	I1028 12:51:01.781702  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 108/120
	I1028 12:51:02.783166  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 109/120
	I1028 12:51:03.785160  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 110/120
	I1028 12:51:04.786491  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 111/120
	I1028 12:51:05.787897  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 112/120
	I1028 12:51:06.789352  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 113/120
	I1028 12:51:07.790650  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 114/120
	I1028 12:51:08.792622  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 115/120
	I1028 12:51:09.793883  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 116/120
	I1028 12:51:10.795103  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 117/120
	I1028 12:51:11.796517  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 118/120
	I1028 12:51:12.797908  127856 main.go:141] libmachine: (embed-certs-818470) Waiting for machine to stop 119/120
	I1028 12:51:13.799236  127856 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 12:51:13.799292  127856 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 12:51:13.801106  127856 out.go:201] 
	W1028 12:51:13.802370  127856 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 12:51:13.802382  127856 out.go:270] * 
	* 
	W1028 12:51:13.805572  127856 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:51:13.806899  127856 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-818470 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470: exit status 3 (18.607795648s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:51:32.416050  128846 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E1028 12:51:32.416070  128846 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-818470" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.13s)
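
The stop loop above ran the full 120 attempts while the kvm2 driver still reported the guest as "Running". A hedged triage sketch on the host (assuming virsh is available to the Jenkins user and that the libvirt domain carries the profile name, as the DHCP-lease lines above suggest):

    # inspect the domain that refused to stop, then try a graceful ACPI shutdown
    sudo virsh list --all
    sudo virsh dominfo embed-certs-818470
    sudo virsh shutdown embed-certs-818470
    # last resort: hard power-off of the guest
    sudo virsh destroy embed-certs-818470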

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-702694 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-702694 --alsologtostderr -v=3: exit status 82 (2m0.479769227s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-702694"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:49:30.485838  128009 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:49:30.485958  128009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:49:30.485967  128009 out.go:358] Setting ErrFile to fd 2...
	I1028 12:49:30.485971  128009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:49:30.486164  128009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:49:30.486400  128009 out.go:352] Setting JSON to false
	I1028 12:49:30.486473  128009 mustload.go:65] Loading cluster: no-preload-702694
	I1028 12:49:30.486835  128009 config.go:182] Loaded profile config "no-preload-702694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:49:30.486900  128009 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/config.json ...
	I1028 12:49:30.487059  128009 mustload.go:65] Loading cluster: no-preload-702694
	I1028 12:49:30.487155  128009 config.go:182] Loaded profile config "no-preload-702694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:49:30.487179  128009 stop.go:39] StopHost: no-preload-702694
	I1028 12:49:30.487505  128009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:49:30.487561  128009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:49:30.502465  128009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I1028 12:49:30.503015  128009 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:49:30.503589  128009 main.go:141] libmachine: Using API Version  1
	I1028 12:49:30.503611  128009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:49:30.504080  128009 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:49:30.506535  128009 out.go:177] * Stopping node "no-preload-702694"  ...
	I1028 12:49:30.507768  128009 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:49:30.507812  128009 main.go:141] libmachine: (no-preload-702694) Calling .DriverName
	I1028 12:49:30.508034  128009 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:49:30.508067  128009 main.go:141] libmachine: (no-preload-702694) Calling .GetSSHHostname
	I1028 12:49:30.510729  128009 main.go:141] libmachine: (no-preload-702694) DBG | domain no-preload-702694 has defined MAC address 52:54:00:12:c4:46 in network mk-no-preload-702694
	I1028 12:49:30.511154  128009 main.go:141] libmachine: (no-preload-702694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c4:46", ip: ""} in network mk-no-preload-702694: {Iface:virbr1 ExpiryTime:2024-10-28 13:47:57 +0000 UTC Type:0 Mac:52:54:00:12:c4:46 Iaid: IPaddr:192.168.72.192 Prefix:24 Hostname:no-preload-702694 Clientid:01:52:54:00:12:c4:46}
	I1028 12:49:30.511184  128009 main.go:141] libmachine: (no-preload-702694) DBG | domain no-preload-702694 has defined IP address 192.168.72.192 and MAC address 52:54:00:12:c4:46 in network mk-no-preload-702694
	I1028 12:49:30.511308  128009 main.go:141] libmachine: (no-preload-702694) Calling .GetSSHPort
	I1028 12:49:30.511471  128009 main.go:141] libmachine: (no-preload-702694) Calling .GetSSHKeyPath
	I1028 12:49:30.511681  128009 main.go:141] libmachine: (no-preload-702694) Calling .GetSSHUsername
	I1028 12:49:30.511834  128009 sshutil.go:53] new ssh client: &{IP:192.168.72.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/no-preload-702694/id_rsa Username:docker}
	I1028 12:49:30.602921  128009 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:49:30.660885  128009 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:49:30.716618  128009 main.go:141] libmachine: Stopping "no-preload-702694"...
	I1028 12:49:30.716673  128009 main.go:141] libmachine: (no-preload-702694) Calling .GetState
	I1028 12:49:30.718292  128009 main.go:141] libmachine: (no-preload-702694) Calling .Stop
	I1028 12:49:30.721692  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 0/120
	I1028 12:49:31.722970  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 1/120
	I1028 12:49:32.724306  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 2/120
	I1028 12:49:33.725685  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 3/120
	I1028 12:49:34.727167  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 4/120
	I1028 12:49:35.729307  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 5/120
	I1028 12:49:36.730682  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 6/120
	I1028 12:49:37.732196  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 7/120
	I1028 12:49:38.733466  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 8/120
	I1028 12:49:39.734885  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 9/120
	I1028 12:49:40.736506  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 10/120
	I1028 12:49:41.737879  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 11/120
	I1028 12:49:42.739185  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 12/120
	I1028 12:49:43.740511  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 13/120
	I1028 12:49:44.742047  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 14/120
	I1028 12:49:45.744000  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 15/120
	I1028 12:49:46.745207  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 16/120
	I1028 12:49:47.746597  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 17/120
	I1028 12:49:48.747830  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 18/120
	I1028 12:49:49.749474  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 19/120
	I1028 12:49:50.751722  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 20/120
	I1028 12:49:51.753349  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 21/120
	I1028 12:49:52.754585  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 22/120
	I1028 12:49:53.755943  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 23/120
	I1028 12:49:54.757260  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 24/120
	I1028 12:49:55.759197  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 25/120
	I1028 12:49:56.760524  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 26/120
	I1028 12:49:57.762297  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 27/120
	I1028 12:49:58.763745  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 28/120
	I1028 12:49:59.764987  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 29/120
	I1028 12:50:00.767453  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 30/120
	I1028 12:50:01.768813  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 31/120
	I1028 12:50:02.770283  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 32/120
	I1028 12:50:03.771530  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 33/120
	I1028 12:50:04.772996  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 34/120
	I1028 12:50:05.775126  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 35/120
	I1028 12:50:06.776560  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 36/120
	I1028 12:50:07.778177  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 37/120
	I1028 12:50:08.779515  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 38/120
	I1028 12:50:09.780702  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 39/120
	I1028 12:50:10.782743  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 40/120
	I1028 12:50:11.784131  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 41/120
	I1028 12:50:12.786028  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 42/120
	I1028 12:50:13.787318  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 43/120
	I1028 12:50:14.788857  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 44/120
	I1028 12:50:15.790632  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 45/120
	I1028 12:50:16.792011  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 46/120
	I1028 12:50:17.793471  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 47/120
	I1028 12:50:18.794631  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 48/120
	I1028 12:50:19.796043  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 49/120
	I1028 12:50:20.797596  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 50/120
	I1028 12:50:21.798788  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 51/120
	I1028 12:50:22.800361  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 52/120
	I1028 12:50:23.801764  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 53/120
	I1028 12:50:24.803830  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 54/120
	I1028 12:50:25.805705  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 55/120
	I1028 12:50:26.807032  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 56/120
	I1028 12:50:27.808379  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 57/120
	I1028 12:50:28.809769  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 58/120
	I1028 12:50:29.810999  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 59/120
	I1028 12:50:30.813008  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 60/120
	I1028 12:50:31.814244  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 61/120
	I1028 12:50:32.815528  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 62/120
	I1028 12:50:33.816572  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 63/120
	I1028 12:50:34.818291  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 64/120
	I1028 12:50:35.820249  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 65/120
	I1028 12:50:36.822346  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 66/120
	I1028 12:50:37.823854  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 67/120
	I1028 12:50:38.825236  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 68/120
	I1028 12:50:39.826672  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 69/120
	I1028 12:50:40.828786  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 70/120
	I1028 12:50:41.830255  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 71/120
	I1028 12:50:42.831475  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 72/120
	I1028 12:50:43.832567  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 73/120
	I1028 12:50:44.834023  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 74/120
	I1028 12:50:45.836132  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 75/120
	I1028 12:50:46.837498  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 76/120
	I1028 12:50:47.838842  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 77/120
	I1028 12:50:48.840254  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 78/120
	I1028 12:50:49.842010  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 79/120
	I1028 12:50:50.843915  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 80/120
	I1028 12:50:51.845292  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 81/120
	I1028 12:50:52.846656  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 82/120
	I1028 12:50:53.848062  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 83/120
	I1028 12:50:54.850304  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 84/120
	I1028 12:50:55.852607  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 85/120
	I1028 12:50:56.854004  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 86/120
	I1028 12:50:57.855409  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 87/120
	I1028 12:50:58.856645  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 88/120
	I1028 12:50:59.857867  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 89/120
	I1028 12:51:00.859778  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 90/120
	I1028 12:51:01.861204  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 91/120
	I1028 12:51:02.862557  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 92/120
	I1028 12:51:03.863951  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 93/120
	I1028 12:51:04.865558  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 94/120
	I1028 12:51:05.867346  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 95/120
	I1028 12:51:06.868708  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 96/120
	I1028 12:51:07.870017  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 97/120
	I1028 12:51:08.871275  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 98/120
	I1028 12:51:09.872546  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 99/120
	I1028 12:51:10.874709  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 100/120
	I1028 12:51:11.876580  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 101/120
	I1028 12:51:12.877879  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 102/120
	I1028 12:51:13.879228  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 103/120
	I1028 12:51:14.880528  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 104/120
	I1028 12:51:15.882333  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 105/120
	I1028 12:51:16.883694  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 106/120
	I1028 12:51:17.885232  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 107/120
	I1028 12:51:18.886416  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 108/120
	I1028 12:51:19.887776  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 109/120
	I1028 12:51:20.889927  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 110/120
	I1028 12:51:21.891176  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 111/120
	I1028 12:51:22.892427  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 112/120
	I1028 12:51:23.894352  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 113/120
	I1028 12:51:24.895707  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 114/120
	I1028 12:51:25.897714  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 115/120
	I1028 12:51:26.899012  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 116/120
	I1028 12:51:27.900342  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 117/120
	I1028 12:51:28.902256  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 118/120
	I1028 12:51:29.903553  128009 main.go:141] libmachine: (no-preload-702694) Waiting for machine to stop 119/120
	I1028 12:51:30.904134  128009 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 12:51:30.904204  128009 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 12:51:30.906354  128009 out.go:201] 
	W1028 12:51:30.907614  128009 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 12:51:30.907643  128009 out.go:270] * 
	* 
	W1028 12:51:30.910723  128009 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:51:30.912184  128009 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-702694 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694: exit status 3 (18.654071398s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:51:49.567988  128941 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.192:22: connect: no route to host
	E1028 12:51:49.568009  128941 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.192:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-702694" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.14s)
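
This is the same GUEST_STOP_TIMEOUT pattern as embed-certs above. Before retrying, the diagnostics minikube asks for in the box can be collected; a small sketch using only the commands and paths printed in this output:

    # gather the logs referenced above for the post-mortem / GitHub issue
    out/minikube-linux-amd64 logs -p no-preload-702694 --file=logs.txt
    cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .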

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-733464 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-733464 create -f testdata/busybox.yaml: exit status 1 (46.527984ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-733464" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-733464 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 6 (226.569868ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:50:49.826963  128560 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-733464" does not appear in /home/jenkins/minikube-integration/19875-77800/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-733464" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 6 (211.720095ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:50:50.041004  128589 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-733464" does not appear in /home/jenkins/minikube-integration/19875-77800/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-733464" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
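
The deploy itself only fails because the kubectl context for this profile is missing, a knock-on effect of the failed first start. The mechanical step the warning above suggests, sketched with the profile from this run (it will not help until the cluster actually comes up):

    # re-sync the kubeconfig entry for the profile, then confirm and retry the deploy
    out/minikube-linux-amd64 update-context -p old-k8s-version-733464
    kubectl config get-contexts
    kubectl --context old-k8s-version-733464 create -f testdata/busybox.yaml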

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (93.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-733464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-733464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m32.834461066s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-733464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-733464 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-733464 describe deploy/metrics-server -n kube-system: exit status 1 (42.700316ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-733464" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-733464 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 6 (214.45815ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:52:23.132750  129396 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-733464" does not appear in /home/jenkins/minikube-integration/19875-77800/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-733464" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (93.09s)
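Diagnosis sketch for this failure: the addon enable only failed because the in-VM kubectl apply of the metrics-server manifests was refused on localhost:8443, i.e. the apiserver was not reachable at that moment. One way to confirm that from the host, assuming the VM still answers SSH:

	out/minikube-linux-amd64 ssh -p old-k8s-version-733464 -- sudo crictl ps -a   # is kube-apiserver running, restarting, or absent?
	out/minikube-linux-amd64 logs -p old-k8s-version-733464 --file=logs.txt       # the log bundle the error box above asks for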

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470: exit status 3 (3.167781649s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:51:35.583923  128971 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E1028 12:51:35.583946  128971 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-818470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-818470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153615256s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-818470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470: exit status 3 (3.062236229s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:51:44.799996  129052 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E1028 12:51:44.800019  129052 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-818470" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
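Diagnosis sketch for this failure: the host that should have been "Stopped" reports state "Error", and every SSH dial to 192.168.50.164:22 returns "no route to host", so the guest is most likely not running at all. With the kvm2 driver, one way to see what libvirt thinks of the machine, assuming the domain name matches the profile name (as it does for old-k8s-version-733464 later in this section):

	sudo virsh list --all                        # states of all defined domains
	sudo virsh domstate embed-certs-818470       # a cleanly stopped profile should report "shut off"
	out/minikube-linux-amd64 status -p embed-certs-818470 --alsologtostderr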

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694: exit status 3 (3.167698961s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:51:52.736012  129156 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.192:22: connect: no route to host
	E1028 12:51:52.736033  129156 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.192:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-702694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-702694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153046575s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.192:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-702694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694: exit status 3 (3.062536285s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:52:01.951950  129219 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.192:22: connect: no route to host
	E1028 12:52:01.951991  129219 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.192:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-702694" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
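Diagnosis sketch for this failure: the same pattern against 192.168.72.192. Before attributing it to minikube, basic reachability of the guest's SSH port can be checked from the host; the per-profile key path and the docker user below are assumptions based on the old-k8s-version-733464 SSH settings shown later in this section:

	ping -c 3 192.168.72.192                     # is the guest answering at all?
	nc -vz -w 5 192.168.72.192 22                # is anything listening on the SSH port?
	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/no-preload-702694/id_rsa \
	    docker@192.168.72.192 'uptime'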

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (761.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-733464 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1028 12:54:20.375913   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:55:43.447070   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:57:13.449008   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:59:20.376718   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-733464 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m40.00738157s)

                                                
                                                
-- stdout --
	* [old-k8s-version-733464] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-733464" primary control-plane node in "old-k8s-version-733464" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-733464" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:52:27.656838  129528 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:52:27.656950  129528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:52:27.656959  129528 out.go:358] Setting ErrFile to fd 2...
	I1028 12:52:27.656963  129528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:52:27.657136  129528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:52:27.657741  129528 out.go:352] Setting JSON to false
	I1028 12:52:27.658727  129528 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9298,"bootTime":1730110650,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:52:27.658827  129528 start.go:139] virtualization: kvm guest
	I1028 12:52:27.661911  129528 out.go:177] * [old-k8s-version-733464] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:52:27.663379  129528 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:52:27.663430  129528 notify.go:220] Checking for updates...
	I1028 12:52:27.666327  129528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:52:27.667617  129528 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:52:27.668894  129528 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:52:27.670255  129528 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:52:27.671486  129528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:52:27.673037  129528 config.go:182] Loaded profile config "old-k8s-version-733464": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:52:27.673442  129528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:52:27.673486  129528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:52:27.688471  129528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I1028 12:52:27.688951  129528 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:52:27.689585  129528 main.go:141] libmachine: Using API Version  1
	I1028 12:52:27.689612  129528 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:52:27.689962  129528 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:52:27.690135  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:52:27.691892  129528 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 12:52:27.693082  129528 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:52:27.693409  129528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:52:27.693451  129528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:52:27.707942  129528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I1028 12:52:27.708363  129528 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:52:27.708825  129528 main.go:141] libmachine: Using API Version  1
	I1028 12:52:27.708851  129528 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:52:27.709165  129528 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:52:27.709370  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:52:27.743741  129528 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:52:27.744974  129528 start.go:297] selected driver: kvm2
	I1028 12:52:27.744986  129528 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:52:27.745097  129528 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:52:27.745798  129528 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:52:27.745895  129528 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:52:27.760100  129528 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:52:27.760465  129528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:52:27.760495  129528 cni.go:84] Creating CNI manager for ""
	I1028 12:52:27.760542  129528 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:52:27.760583  129528 start.go:340] cluster config:
	{Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:52:27.760681  129528 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:52:27.762428  129528 out.go:177] * Starting "old-k8s-version-733464" primary control-plane node in "old-k8s-version-733464" cluster
	I1028 12:52:27.763615  129528 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:52:27.763664  129528 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 12:52:27.763676  129528 cache.go:56] Caching tarball of preloaded images
	I1028 12:52:27.763762  129528 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:52:27.763785  129528 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 12:52:27.763881  129528 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/config.json ...
	I1028 12:52:27.764049  129528 start.go:360] acquireMachinesLock for old-k8s-version-733464: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:56:42.711982  129528 start.go:364] duration metric: took 4m14.947885829s to acquireMachinesLock for "old-k8s-version-733464"
	I1028 12:56:42.712058  129528 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:56:42.712070  129528 fix.go:54] fixHost starting: 
	I1028 12:56:42.712542  129528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:56:42.712614  129528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:56:42.729709  129528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34621
	I1028 12:56:42.730197  129528 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:56:42.730653  129528 main.go:141] libmachine: Using API Version  1
	I1028 12:56:42.730678  129528 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:56:42.731053  129528 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:56:42.731244  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:56:42.731395  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetState
	I1028 12:56:42.733140  129528 fix.go:112] recreateIfNeeded on old-k8s-version-733464: state=Stopped err=<nil>
	I1028 12:56:42.733168  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	W1028 12:56:42.733313  129528 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:56:42.735298  129528 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-733464" ...
	I1028 12:56:42.736688  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .Start
	I1028 12:56:42.736858  129528 main.go:141] libmachine: (old-k8s-version-733464) Ensuring networks are active...
	I1028 12:56:42.737640  129528 main.go:141] libmachine: (old-k8s-version-733464) Ensuring network default is active
	I1028 12:56:42.738031  129528 main.go:141] libmachine: (old-k8s-version-733464) Ensuring network mk-old-k8s-version-733464 is active
	I1028 12:56:42.738486  129528 main.go:141] libmachine: (old-k8s-version-733464) Getting domain xml...
	I1028 12:56:42.739218  129528 main.go:141] libmachine: (old-k8s-version-733464) Creating domain...
	I1028 12:56:43.991482  129528 main.go:141] libmachine: (old-k8s-version-733464) Waiting to get IP...
	I1028 12:56:43.992553  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:43.993055  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:43.993153  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:43.993043  130599 retry.go:31] will retry after 275.654094ms: waiting for machine to come up
	I1028 12:56:44.270508  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:44.271096  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:44.271137  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:44.271025  130599 retry.go:31] will retry after 325.303753ms: waiting for machine to come up
	I1028 12:56:44.597622  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:44.598286  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:44.598317  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:44.598207  130599 retry.go:31] will retry after 338.082528ms: waiting for machine to come up
	I1028 12:56:44.937466  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:44.937899  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:44.937923  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:44.937857  130599 retry.go:31] will retry after 506.832734ms: waiting for machine to come up
	I1028 12:56:45.446704  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:45.447328  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:45.447358  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:45.447268  130599 retry.go:31] will retry after 623.585102ms: waiting for machine to come up
	I1028 12:56:46.072448  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:46.072981  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:46.073011  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:46.072918  130599 retry.go:31] will retry after 804.227761ms: waiting for machine to come up
	I1028 12:56:46.878544  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:46.878981  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:46.879006  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:46.878931  130599 retry.go:31] will retry after 973.458487ms: waiting for machine to come up
	I1028 12:56:47.854003  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:47.854465  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:47.854495  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:47.854406  130599 retry.go:31] will retry after 1.2728995s: waiting for machine to come up
	I1028 12:56:49.128974  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:49.129586  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:49.129612  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:49.129540  130599 retry.go:31] will retry after 1.457952239s: waiting for machine to come up
	I1028 12:56:50.589112  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:50.589597  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:50.589620  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:50.589531  130599 retry.go:31] will retry after 1.504046021s: waiting for machine to come up
	I1028 12:56:52.095089  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:52.095689  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:52.095717  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:52.095613  130599 retry.go:31] will retry after 2.247105354s: waiting for machine to come up
	I1028 12:56:54.344454  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:54.344877  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:54.344907  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:54.344823  130599 retry.go:31] will retry after 2.642721547s: waiting for machine to come up
	I1028 12:56:56.990789  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:56:56.991349  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | unable to find current IP address of domain old-k8s-version-733464 in network mk-old-k8s-version-733464
	I1028 12:56:56.991383  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | I1028 12:56:56.991300  130599 retry.go:31] will retry after 3.851811711s: waiting for machine to come up
	I1028 12:57:00.844233  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.844706  129528 main.go:141] libmachine: (old-k8s-version-733464) Found IP for machine: 192.168.39.208
	I1028 12:57:00.844735  129528 main.go:141] libmachine: (old-k8s-version-733464) Reserving static IP address...
	I1028 12:57:00.844748  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has current primary IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.845141  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "old-k8s-version-733464", mac: "52:54:00:cf:6c:2d", ip: "192.168.39.208"} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:00.845167  129528 main.go:141] libmachine: (old-k8s-version-733464) Reserved static IP address: 192.168.39.208
	I1028 12:57:00.845186  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | skip adding static IP to network mk-old-k8s-version-733464 - found existing host DHCP lease matching {name: "old-k8s-version-733464", mac: "52:54:00:cf:6c:2d", ip: "192.168.39.208"}
	I1028 12:57:00.845209  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | Getting to WaitForSSH function...
	I1028 12:57:00.845220  129528 main.go:141] libmachine: (old-k8s-version-733464) Waiting for SSH to be available...
	I1028 12:57:00.847393  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.847656  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:00.847689  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.847806  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | Using SSH client type: external
	I1028 12:57:00.847831  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa (-rw-------)
	I1028 12:57:00.847873  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:57:00.847887  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | About to run SSH command:
	I1028 12:57:00.847902  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | exit 0
	I1028 12:57:00.967215  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | SSH cmd err, output: <nil>: 
	I1028 12:57:00.967556  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetConfigRaw
	I1028 12:57:00.968262  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:57:00.970867  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.971210  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:00.971245  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.971482  129528 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/config.json ...
	I1028 12:57:00.971743  129528 machine.go:93] provisionDockerMachine start ...
	I1028 12:57:00.971768  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:57:00.971995  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:00.974288  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.974655  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:00.974687  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:00.974787  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:00.974949  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:00.975129  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:00.975290  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:00.975439  129528 main.go:141] libmachine: Using SSH client type: native
	I1028 12:57:00.975651  129528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:57:00.975665  129528 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:57:01.075892  129528 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:57:01.075944  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetMachineName
	I1028 12:57:01.076245  129528 buildroot.go:166] provisioning hostname "old-k8s-version-733464"
	I1028 12:57:01.076278  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetMachineName
	I1028 12:57:01.076478  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:01.079588  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.080006  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.080040  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.080233  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:01.080452  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.080672  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.080896  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:01.081104  129528 main.go:141] libmachine: Using SSH client type: native
	I1028 12:57:01.081305  129528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:57:01.081322  129528 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-733464 && echo "old-k8s-version-733464" | sudo tee /etc/hostname
	I1028 12:57:01.199105  129528 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-733464
	
	I1028 12:57:01.199139  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:01.202479  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.202870  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.202922  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.203042  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:01.203265  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.203435  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.203617  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:01.203797  129528 main.go:141] libmachine: Using SSH client type: native
	I1028 12:57:01.204020  129528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:57:01.204045  129528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-733464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-733464/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-733464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:57:01.317988  129528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:57:01.318034  129528 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 12:57:01.318077  129528 buildroot.go:174] setting up certificates
	I1028 12:57:01.318100  129528 provision.go:84] configureAuth start
	I1028 12:57:01.318117  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetMachineName
	I1028 12:57:01.318447  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:57:01.321322  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.321674  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.321722  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.321862  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:01.324666  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.325221  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.325253  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.325456  129528 provision.go:143] copyHostCerts
	I1028 12:57:01.325544  129528 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 12:57:01.325569  129528 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 12:57:01.325644  129528 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 12:57:01.325775  129528 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 12:57:01.325789  129528 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 12:57:01.325825  129528 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 12:57:01.325912  129528 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 12:57:01.325923  129528 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 12:57:01.325950  129528 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 12:57:01.326025  129528 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-733464 san=[127.0.0.1 192.168.39.208 localhost minikube old-k8s-version-733464]
	I1028 12:57:01.427544  129528 provision.go:177] copyRemoteCerts
	I1028 12:57:01.427602  129528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:57:01.427652  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:01.430694  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.431159  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.431196  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.431363  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:01.431601  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.431911  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:01.432096  129528 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:57:01.521385  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:57:01.551066  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:57:01.583119  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 12:57:01.613721  129528 provision.go:87] duration metric: took 295.604065ms to configureAuth
	I1028 12:57:01.613757  129528 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:57:01.614009  129528 config.go:182] Loaded profile config "old-k8s-version-733464": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:57:01.614128  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:01.617276  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.617779  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.617817  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.617966  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:01.618281  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.618461  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.618624  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:01.618828  129528 main.go:141] libmachine: Using SSH client type: native
	I1028 12:57:01.619035  129528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:57:01.619056  129528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:57:01.846161  129528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:57:01.846192  129528 machine.go:96] duration metric: took 874.43293ms to provisionDockerMachine
	I1028 12:57:01.846207  129528 start.go:293] postStartSetup for "old-k8s-version-733464" (driver="kvm2")
	I1028 12:57:01.846221  129528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:57:01.846258  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:57:01.846600  129528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:57:01.846640  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:01.849685  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.850039  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.850089  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.850193  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:01.850367  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.850515  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:01.850669  129528 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:57:01.936454  129528 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:57:01.941405  129528 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:57:01.941433  129528 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 12:57:01.941532  129528 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 12:57:01.941655  129528 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 12:57:01.941794  129528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:57:01.951502  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:57:01.978191  129528 start.go:296] duration metric: took 131.967866ms for postStartSetup
	I1028 12:57:01.978240  129528 fix.go:56] duration metric: took 19.266169917s for fixHost
	I1028 12:57:01.978268  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:01.981099  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.981537  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:01.981577  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:01.981748  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:01.981964  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.982140  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:01.982270  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:01.982476  129528 main.go:141] libmachine: Using SSH client type: native
	I1028 12:57:01.982705  129528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1028 12:57:01.982718  129528 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:57:02.089131  129528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730120222.051172517
	
	I1028 12:57:02.089238  129528 fix.go:216] guest clock: 1730120222.051172517
	I1028 12:57:02.089256  129528 fix.go:229] Guest: 2024-10-28 12:57:02.051172517 +0000 UTC Remote: 2024-10-28 12:57:01.97824446 +0000 UTC m=+274.359391556 (delta=72.928057ms)
	I1028 12:57:02.089291  129528 fix.go:200] guest clock delta is within tolerance: 72.928057ms
	I1028 12:57:02.089298  129528 start.go:83] releasing machines lock for "old-k8s-version-733464", held for 19.377265384s
	I1028 12:57:02.089334  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:57:02.089584  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:57:02.092514  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:02.092896  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:02.092925  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:02.093200  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:57:02.093725  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:57:02.093897  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .DriverName
	I1028 12:57:02.094039  129528 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:57:02.094085  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:02.094103  129528 ssh_runner.go:195] Run: cat /version.json
	I1028 12:57:02.094130  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHHostname
	I1028 12:57:02.097611  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:02.098034  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:02.098061  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:02.098233  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:02.098301  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:02.098481  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:02.098695  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:02.098780  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:02.098814  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:02.098924  129528 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:57:02.099253  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHPort
	I1028 12:57:02.099376  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHKeyPath
	I1028 12:57:02.099542  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetSSHUsername
	I1028 12:57:02.099683  129528 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/old-k8s-version-733464/id_rsa Username:docker}
	I1028 12:57:02.181653  129528 ssh_runner.go:195] Run: systemctl --version
	I1028 12:57:02.208543  129528 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:57:02.367604  129528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:57:02.374838  129528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:57:02.374909  129528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:57:02.390129  129528 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:57:02.390161  129528 start.go:495] detecting cgroup driver to use...
	I1028 12:57:02.390268  129528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:57:02.408458  129528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:57:02.423907  129528 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:57:02.423978  129528 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:57:02.440450  129528 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:57:02.455734  129528 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:57:02.592226  129528 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:57:02.744574  129528 docker.go:233] disabling docker service ...
	I1028 12:57:02.744642  129528 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:57:02.762249  129528 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:57:02.777595  129528 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:57:02.927909  129528 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:57:03.066808  129528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:57:03.080232  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:57:03.099618  129528 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:57:03.099715  129528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:57:03.109594  129528 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:57:03.109673  129528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:57:03.121363  129528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:57:03.133668  129528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:57:03.145901  129528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:57:03.156674  129528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:57:03.170043  129528 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:57:03.170114  129528 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:57:03.183206  129528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:57:03.193132  129528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:57:03.345562  129528 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:57:03.447234  129528 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:57:03.447351  129528 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:57:03.453709  129528 start.go:563] Will wait 60s for crictl version
	I1028 12:57:03.453774  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:03.457368  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:57:03.505615  129528 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:57:03.505712  129528 ssh_runner.go:195] Run: crio --version
	I1028 12:57:03.540892  129528 ssh_runner.go:195] Run: crio --version
	I1028 12:57:03.571009  129528 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:57:03.572457  129528 main.go:141] libmachine: (old-k8s-version-733464) Calling .GetIP
	I1028 12:57:03.575643  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:03.576033  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:6c:2d", ip: ""} in network mk-old-k8s-version-733464: {Iface:virbr4 ExpiryTime:2024-10-28 13:56:53 +0000 UTC Type:0 Mac:52:54:00:cf:6c:2d Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:old-k8s-version-733464 Clientid:01:52:54:00:cf:6c:2d}
	I1028 12:57:03.576070  129528 main.go:141] libmachine: (old-k8s-version-733464) DBG | domain old-k8s-version-733464 has defined IP address 192.168.39.208 and MAC address 52:54:00:cf:6c:2d in network mk-old-k8s-version-733464
	I1028 12:57:03.576315  129528 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:57:03.580829  129528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:57:03.595769  129528 kubeadm.go:883] updating cluster {Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:57:03.595932  129528 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:57:03.596008  129528 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:57:03.652744  129528 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:57:03.652845  129528 ssh_runner.go:195] Run: which lz4
	I1028 12:57:03.657454  129528 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:57:03.661757  129528 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:57:03.661799  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:57:05.050824  129528 crio.go:462] duration metric: took 1.393416433s to copy over tarball
	I1028 12:57:05.050922  129528 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:57:08.211459  129528 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.160502848s)
	I1028 12:57:08.211489  129528 crio.go:469] duration metric: took 3.160630432s to extract the tarball
	I1028 12:57:08.211514  129528 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:57:08.272590  129528 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:57:08.314449  129528 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:57:08.314497  129528 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:57:08.314622  129528 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:57:08.314645  129528 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:57:08.314665  129528 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:57:08.314603  129528 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:57:08.314601  129528 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:57:08.314718  129528 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:57:08.314770  129528 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:57:08.314781  129528 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:57:08.316506  129528 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:57:08.316522  129528 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:57:08.316598  129528 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:57:08.316606  129528 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:57:08.316615  129528 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:57:08.316601  129528 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:57:08.316661  129528 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:57:08.316637  129528 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:57:08.489202  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:57:08.489258  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:57:08.491744  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:57:08.498192  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:57:08.502750  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:57:08.508919  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:57:08.522345  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:57:08.667443  129528 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:57:08.667520  129528 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:57:08.667576  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:08.667823  129528 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:57:08.668024  129528 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:57:08.668099  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:08.677334  129528 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:57:08.677382  129528 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:57:08.677402  129528 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:57:08.677429  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:08.677441  129528 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:57:08.677492  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:08.691122  129528 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:57:08.691167  129528 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:57:08.691173  129528 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:57:08.691205  129528 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:57:08.691227  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:08.691230  129528 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:57:08.691255  129528 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:57:08.691268  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:08.691285  129528 ssh_runner.go:195] Run: which crictl
	I1028 12:57:08.691322  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:57:08.691397  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:57:08.691422  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:57:08.691478  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:57:08.802738  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:57:08.802779  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:57:08.802826  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:57:08.802924  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:57:08.802949  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:57:08.811976  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:57:08.812563  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:57:09.007005  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:57:09.007054  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:57:09.007153  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:57:09.007261  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:57:09.007352  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:57:09.130831  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:57:09.130867  129528 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:57:09.130943  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:57:09.130948  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:57:09.131057  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:57:09.156796  129528 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:57:09.157018  129528 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:57:09.270938  129528 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:57:09.287664  129528 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:57:09.287696  129528 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:57:09.289915  129528 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:57:09.290000  129528 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:57:09.290051  129528 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:57:09.414870  129528 cache_images.go:92] duration metric: took 1.100340465s to LoadCachedImages
	W1028 12:57:09.414989  129528 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19875-77800/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1028 12:57:09.415009  129528 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.20.0 crio true true} ...
	I1028 12:57:09.415210  129528 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-733464 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:57:09.415301  129528 ssh_runner.go:195] Run: crio config
	I1028 12:57:09.480980  129528 cni.go:84] Creating CNI manager for ""
	I1028 12:57:09.481012  129528 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:57:09.481024  129528 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:57:09.481043  129528 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-733464 NodeName:old-k8s-version-733464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:57:09.481176  129528 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-733464"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:57:09.481245  129528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:57:09.492466  129528 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:57:09.492540  129528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:57:09.502933  129528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:57:09.518985  129528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:57:09.535069  129528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:57:09.553284  129528 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I1028 12:57:09.557825  129528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:57:09.572254  129528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:57:09.699823  129528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:57:09.717811  129528 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464 for IP: 192.168.39.208
	I1028 12:57:09.717839  129528 certs.go:194] generating shared ca certs ...
	I1028 12:57:09.717862  129528 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:57:09.718086  129528 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 12:57:09.718153  129528 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 12:57:09.718168  129528 certs.go:256] generating profile certs ...
	I1028 12:57:09.718325  129528 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.key
	I1028 12:57:09.718419  129528 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key.56bd5639
	I1028 12:57:09.718476  129528 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.key
	I1028 12:57:09.718664  129528 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 12:57:09.718712  129528 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 12:57:09.718727  129528 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 12:57:09.718762  129528 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 12:57:09.718807  129528 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:57:09.718850  129528 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 12:57:09.718916  129528 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 12:57:09.719992  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:57:09.752177  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:57:09.775346  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:57:09.797519  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:57:09.820225  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:57:09.854087  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:57:09.888370  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:57:09.922459  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:57:09.957271  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 12:57:09.979958  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 12:57:10.003439  129528 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:57:10.025685  129528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:57:10.041435  129528 ssh_runner.go:195] Run: openssl version
	I1028 12:57:10.048422  129528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 12:57:10.061653  129528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 12:57:10.065986  129528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 12:57:10.066036  129528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 12:57:10.072222  129528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 12:57:10.083674  129528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 12:57:10.094731  129528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 12:57:10.098909  129528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 12:57:10.098960  129528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 12:57:10.104404  129528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:57:10.115465  129528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:57:10.126983  129528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:57:10.131129  129528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:57:10.131183  129528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:57:10.136400  129528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:57:10.147559  129528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:57:10.151644  129528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:57:10.157150  129528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:57:10.162668  129528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:57:10.169943  129528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:57:10.175873  129528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:57:10.181771  129528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:57:10.187488  129528 kubeadm.go:392] StartCluster: {Name:old-k8s-version-733464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-733464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:57:10.187597  129528 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:57:10.187672  129528 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:57:10.224477  129528 cri.go:89] found id: ""
	I1028 12:57:10.224553  129528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:57:10.234555  129528 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:57:10.234582  129528 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:57:10.234635  129528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:57:10.244071  129528 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:57:10.245082  129528 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-733464" does not appear in /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:57:10.245758  129528 kubeconfig.go:62] /home/jenkins/minikube-integration/19875-77800/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-733464" cluster setting kubeconfig missing "old-k8s-version-733464" context setting]
	I1028 12:57:10.246698  129528 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:57:10.312353  129528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:57:10.322516  129528 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.208
	I1028 12:57:10.322556  129528 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:57:10.322570  129528 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:57:10.322618  129528 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:57:10.361061  129528 cri.go:89] found id: ""
	I1028 12:57:10.361144  129528 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:57:10.381138  129528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:57:10.392288  129528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:57:10.392314  129528 kubeadm.go:157] found existing configuration files:
	
	I1028 12:57:10.392375  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:57:10.402530  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:57:10.402595  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:57:10.413038  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:57:10.422900  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:57:10.422966  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:57:10.433521  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:57:10.442897  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:57:10.442953  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:57:10.452639  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:57:10.464355  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:57:10.464435  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:57:10.476693  129528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:57:10.486408  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:57:10.624982  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:57:11.548050  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:57:11.784469  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:57:11.882731  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:57:11.976648  129528 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:57:11.976804  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:12.477072  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:12.977543  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:13.477410  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:13.977885  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:14.477329  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:14.977685  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:15.476977  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:15.976845  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:16.477820  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:16.976918  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:17.477495  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:17.977214  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:18.476856  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:18.977601  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:19.477262  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:19.977095  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:20.477150  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:20.977553  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:21.477616  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:21.977441  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:22.477444  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:22.977116  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:23.477482  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:23.977020  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:24.476895  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:24.977558  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:25.476925  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:25.976887  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:26.477232  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:26.977763  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:27.477434  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:27.977776  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:28.476921  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:28.977114  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:29.476853  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:29.977475  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:30.477555  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:30.977859  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:31.477332  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:31.977215  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:32.476957  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:32.977165  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:33.477487  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:33.976923  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:34.477318  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:34.977818  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:35.476842  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:35.977222  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:36.477674  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:36.977822  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:37.477366  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:37.977243  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:38.476826  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:38.977129  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:39.476981  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:39.977135  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:40.477218  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:40.977507  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:41.477105  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:41.977631  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:42.476932  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:42.977328  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:43.477623  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:43.977640  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:44.477569  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:44.977063  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:45.477860  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:45.977497  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:46.477328  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:46.977283  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:47.477751  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:47.977653  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:48.477764  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:48.977714  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:49.477856  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:49.977303  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:50.477489  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:50.976923  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:51.477075  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:51.977589  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:52.477776  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:52.977442  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:53.477533  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:53.977211  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:54.477074  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:54.977518  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:55.477547  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:55.977551  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:56.477627  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:56.977615  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:57.477800  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:57.977084  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:58.477741  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:58.977541  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:59.477428  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:57:59.976959  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:00.477828  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:00.976840  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:01.477740  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:01.976805  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:02.477031  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:02.977167  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:03.477008  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:03.977519  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:04.477681  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:04.977548  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:05.477294  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:05.976852  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:06.477658  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:06.977795  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:07.477166  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:07.977811  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:08.477037  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:08.977583  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:09.476967  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:09.977216  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:10.477338  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:10.977059  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:11.477708  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
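From 12:57:11 to 12:58:11 the run polls roughly every 500 ms for a kube-apiserver process and never finds one, then gives up and switches to inspecting CRI containers and gathering logs. A minimal sketch of that wait loop follows; the interval and the one-minute budget are inferred from the timestamps above, and the structure is an assumption rather than minikube's exact implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(60 * time.Second) // ~1 minute, matching 12:57:11 -> 12:58:11
		for time.Now().Before(deadline) {
			// pgrep exits 0 once a kube-apiserver process with "minikube" in its
			// command line exists; until then each attempt fails and we retry.
			err := exec.Command("/bin/bash", "-c",
				"sudo pgrep -xnf kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("apiserver process is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the apiserver process; falling back to log gathering")
	}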
	I1028 12:58:11.976941  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:11.977037  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:12.024369  129528 cri.go:89] found id: ""
	I1028 12:58:12.024406  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.024418  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:12.024426  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:12.024510  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:12.060185  129528 cri.go:89] found id: ""
	I1028 12:58:12.060212  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.060219  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:12.060226  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:12.060289  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:12.091530  129528 cri.go:89] found id: ""
	I1028 12:58:12.091566  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.091577  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:12.091584  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:12.091653  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:12.126277  129528 cri.go:89] found id: ""
	I1028 12:58:12.126311  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.126322  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:12.126332  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:12.126393  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:12.161666  129528 cri.go:89] found id: ""
	I1028 12:58:12.161690  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.161697  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:12.161704  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:12.161768  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:12.193951  129528 cri.go:89] found id: ""
	I1028 12:58:12.193976  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.193986  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:12.193994  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:12.194053  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:12.233792  129528 cri.go:89] found id: ""
	I1028 12:58:12.233827  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.233837  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:12.233844  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:12.233903  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:12.265506  129528 cri.go:89] found id: ""
	I1028 12:58:12.265548  129528 logs.go:282] 0 containers: []
	W1028 12:58:12.265561  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:12.265574  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:12.265589  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:12.302204  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:12.302233  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:12.354695  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:12.354735  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:12.367461  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:12.367496  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:12.482243  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:12.482271  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:12.482289  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
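When no control-plane containers can be found, each retry cycle collects the same five diagnostics: the kubelet and CRI-O journals, dmesg at warning level and above, kubectl describe nodes (which fails here because nothing is listening on localhost:8443), and a container listing via crictl with a docker fallback. The commands below are the ones quoted in the log; the wrapper that runs them in sequence is only a convenience for reproducing the gathering step by hand and is not part of minikube.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		diagnostics := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"sudo journalctl -u crio -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, d := range diagnostics {
			fmt.Println("###", d)
			out, _ := exec.Command("/bin/bash", "-c", d).CombinedOutput() // keep going even if one command fails
			fmt.Println(string(out))
		}
	}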
	I1028 12:58:15.049975  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:15.063187  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:15.063258  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:15.096696  129528 cri.go:89] found id: ""
	I1028 12:58:15.096721  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.096730  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:15.096737  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:15.096791  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:15.129685  129528 cri.go:89] found id: ""
	I1028 12:58:15.129716  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.129729  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:15.129737  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:15.129799  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:15.161429  129528 cri.go:89] found id: ""
	I1028 12:58:15.161463  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.161474  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:15.161481  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:15.161541  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:15.193221  129528 cri.go:89] found id: ""
	I1028 12:58:15.193252  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.193262  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:15.193270  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:15.193331  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:15.224920  129528 cri.go:89] found id: ""
	I1028 12:58:15.224945  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.224953  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:15.224959  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:15.225008  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:15.255900  129528 cri.go:89] found id: ""
	I1028 12:58:15.255925  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.255935  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:15.255942  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:15.255994  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:15.286709  129528 cri.go:89] found id: ""
	I1028 12:58:15.286736  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.286744  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:15.286751  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:15.286804  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:15.318121  129528 cri.go:89] found id: ""
	I1028 12:58:15.318147  129528 logs.go:282] 0 containers: []
	W1028 12:58:15.318157  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:15.318168  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:15.318180  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:15.364655  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:15.364689  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:15.376796  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:15.376822  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:15.451128  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:15.451163  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:15.451180  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:15.524192  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:15.524231  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:18.062026  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:18.074207  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:18.074266  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:18.105547  129528 cri.go:89] found id: ""
	I1028 12:58:18.105578  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.105589  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:18.105598  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:18.105668  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:18.135219  129528 cri.go:89] found id: ""
	I1028 12:58:18.135246  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.135258  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:18.135265  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:18.135317  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:18.173050  129528 cri.go:89] found id: ""
	I1028 12:58:18.173074  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.173081  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:18.173088  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:18.173138  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:18.204336  129528 cri.go:89] found id: ""
	I1028 12:58:18.204362  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.204370  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:18.204377  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:18.204422  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:18.232558  129528 cri.go:89] found id: ""
	I1028 12:58:18.232582  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.232593  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:18.232601  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:18.232681  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:18.262332  129528 cri.go:89] found id: ""
	I1028 12:58:18.262354  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.262363  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:18.262369  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:18.262419  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:18.293778  129528 cri.go:89] found id: ""
	I1028 12:58:18.293801  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.293808  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:18.293814  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:18.293881  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:18.325041  129528 cri.go:89] found id: ""
	I1028 12:58:18.325067  129528 logs.go:282] 0 containers: []
	W1028 12:58:18.325074  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:18.325085  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:18.325102  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:18.377373  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:18.377409  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:18.391777  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:18.391805  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:18.463122  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:18.463146  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:18.463162  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:18.543507  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:18.543543  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:21.078009  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:21.090602  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:21.090665  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:21.126739  129528 cri.go:89] found id: ""
	I1028 12:58:21.126768  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.126776  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:21.126784  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:21.126853  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:21.159365  129528 cri.go:89] found id: ""
	I1028 12:58:21.159391  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.159399  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:21.159405  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:21.159458  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:21.193578  129528 cri.go:89] found id: ""
	I1028 12:58:21.193610  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.193623  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:21.193639  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:21.193706  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:21.225483  129528 cri.go:89] found id: ""
	I1028 12:58:21.225515  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.225526  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:21.225535  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:21.225601  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:21.259390  129528 cri.go:89] found id: ""
	I1028 12:58:21.259425  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.259436  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:21.259445  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:21.259508  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:21.296399  129528 cri.go:89] found id: ""
	I1028 12:58:21.296431  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.296441  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:21.296449  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:21.296509  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:21.341588  129528 cri.go:89] found id: ""
	I1028 12:58:21.341615  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.341624  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:21.341648  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:21.341716  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:21.382040  129528 cri.go:89] found id: ""
	I1028 12:58:21.382074  129528 logs.go:282] 0 containers: []
	W1028 12:58:21.382087  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:21.382098  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:21.382111  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:21.472623  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:21.472647  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:21.472662  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:21.552297  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:21.552338  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:21.594362  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:21.594396  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:21.642023  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:21.642063  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:24.156605  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:24.169023  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:24.169101  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:24.206212  129528 cri.go:89] found id: ""
	I1028 12:58:24.206240  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.206251  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:24.206259  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:24.206322  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:24.241876  129528 cri.go:89] found id: ""
	I1028 12:58:24.241899  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.241906  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:24.241912  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:24.241959  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:24.275931  129528 cri.go:89] found id: ""
	I1028 12:58:24.275962  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.275971  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:24.275977  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:24.276039  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:24.309892  129528 cri.go:89] found id: ""
	I1028 12:58:24.309924  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.309932  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:24.309942  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:24.310006  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:24.339370  129528 cri.go:89] found id: ""
	I1028 12:58:24.339404  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.339414  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:24.339425  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:24.339478  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:24.370315  129528 cri.go:89] found id: ""
	I1028 12:58:24.370356  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.370368  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:24.370376  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:24.370432  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:24.402536  129528 cri.go:89] found id: ""
	I1028 12:58:24.402570  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.402582  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:24.402590  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:24.402656  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:24.437589  129528 cri.go:89] found id: ""
	I1028 12:58:24.437615  129528 logs.go:282] 0 containers: []
	W1028 12:58:24.437625  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:24.437637  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:24.437665  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:24.474251  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:24.474282  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:24.522947  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:24.522985  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:24.535848  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:24.535879  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:24.600417  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:24.600443  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:24.600461  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:27.174450  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:27.188872  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:27.188939  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:27.222559  129528 cri.go:89] found id: ""
	I1028 12:58:27.222589  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.222602  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:27.222611  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:27.222679  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:27.255655  129528 cri.go:89] found id: ""
	I1028 12:58:27.255686  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.255697  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:27.255705  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:27.255767  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:27.285056  129528 cri.go:89] found id: ""
	I1028 12:58:27.285087  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.285098  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:27.285107  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:27.285171  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:27.318953  129528 cri.go:89] found id: ""
	I1028 12:58:27.318978  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.318986  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:27.318992  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:27.319040  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:27.348274  129528 cri.go:89] found id: ""
	I1028 12:58:27.348299  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.348321  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:27.348328  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:27.348382  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:27.382443  129528 cri.go:89] found id: ""
	I1028 12:58:27.382468  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.382477  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:27.382486  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:27.382549  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:27.412658  129528 cri.go:89] found id: ""
	I1028 12:58:27.412684  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.412692  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:27.412709  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:27.412784  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:27.443107  129528 cri.go:89] found id: ""
	I1028 12:58:27.443139  129528 logs.go:282] 0 containers: []
	W1028 12:58:27.443149  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:27.443158  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:27.443169  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:27.517554  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:27.517592  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:27.553275  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:27.553311  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:27.604600  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:27.604644  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:27.617715  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:27.617755  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:27.691379  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
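The recurring "connection to the server localhost:8443 was refused" message is the direct consequence of the empty crictl listings above: with no kube-apiserver container running, nothing serves the API port, so every kubectl call through the node-local kubeconfig is refused. A quick way to confirm which side is failing is to probe the port directly before suspecting the kubeconfig; the check below is illustrative only and not part of the test.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// If this dial fails, the apiserver simply is not listening, which matches
		// the empty "crictl ps --name=kube-apiserver" results in the log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("nothing listening on localhost:8443:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8443 is open; the problem is elsewhere (certs or kubeconfig)")
	}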
	I1028 12:58:30.191668  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:30.203673  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:30.203737  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:30.234177  129528 cri.go:89] found id: ""
	I1028 12:58:30.234204  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.234212  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:30.234219  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:30.234275  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:30.264763  129528 cri.go:89] found id: ""
	I1028 12:58:30.264802  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.264815  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:30.264824  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:30.264884  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:30.297689  129528 cri.go:89] found id: ""
	I1028 12:58:30.297720  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.297732  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:30.297740  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:30.297815  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:30.328016  129528 cri.go:89] found id: ""
	I1028 12:58:30.328044  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.328052  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:30.328059  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:30.328114  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:30.359274  129528 cri.go:89] found id: ""
	I1028 12:58:30.359306  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.359315  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:30.359322  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:30.359390  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:30.395192  129528 cri.go:89] found id: ""
	I1028 12:58:30.395222  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.395234  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:30.395244  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:30.395305  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:30.427818  129528 cri.go:89] found id: ""
	I1028 12:58:30.427847  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.427864  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:30.427873  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:30.427932  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:30.462464  129528 cri.go:89] found id: ""
	I1028 12:58:30.462499  129528 logs.go:282] 0 containers: []
	W1028 12:58:30.462511  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:30.462522  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:30.462534  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:30.513662  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:30.513699  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:30.527579  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:30.527613  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:30.594147  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:30.594172  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:30.594185  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:30.676694  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:30.676730  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:33.215529  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:33.227533  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:33.227595  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:33.258151  129528 cri.go:89] found id: ""
	I1028 12:58:33.258186  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.258198  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:33.258208  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:33.258273  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:33.290651  129528 cri.go:89] found id: ""
	I1028 12:58:33.290686  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.290697  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:33.290705  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:33.290842  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:33.321238  129528 cri.go:89] found id: ""
	I1028 12:58:33.321262  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.321271  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:33.321277  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:33.321327  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:33.351122  129528 cri.go:89] found id: ""
	I1028 12:58:33.351161  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.351173  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:33.351180  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:33.351254  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:33.381448  129528 cri.go:89] found id: ""
	I1028 12:58:33.381485  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.381495  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:33.381503  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:33.381580  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:33.415603  129528 cri.go:89] found id: ""
	I1028 12:58:33.415656  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.415669  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:33.415681  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:33.415750  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:33.451361  129528 cri.go:89] found id: ""
	I1028 12:58:33.451394  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.451404  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:33.451412  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:33.451478  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:33.483301  129528 cri.go:89] found id: ""
	I1028 12:58:33.483331  129528 logs.go:282] 0 containers: []
	W1028 12:58:33.483343  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:33.483356  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:33.483371  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:33.533566  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:33.533609  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:33.545717  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:33.545743  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:33.607644  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:33.607683  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:33.607711  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:33.686574  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:33.686609  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:36.234061  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:36.248404  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:36.248488  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:36.281636  129528 cri.go:89] found id: ""
	I1028 12:58:36.281676  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.281688  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:36.281698  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:36.281762  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:36.313491  129528 cri.go:89] found id: ""
	I1028 12:58:36.313523  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.313534  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:36.313545  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:36.313622  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:36.345551  129528 cri.go:89] found id: ""
	I1028 12:58:36.345588  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.345599  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:36.345608  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:36.345666  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:36.377283  129528 cri.go:89] found id: ""
	I1028 12:58:36.377310  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.377321  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:36.377329  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:36.377396  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:36.410341  129528 cri.go:89] found id: ""
	I1028 12:58:36.410377  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.410389  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:36.410398  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:36.410460  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:36.444957  129528 cri.go:89] found id: ""
	I1028 12:58:36.444985  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.444996  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:36.445005  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:36.445069  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:36.479421  129528 cri.go:89] found id: ""
	I1028 12:58:36.479448  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.479456  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:36.479464  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:36.479529  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:36.515323  129528 cri.go:89] found id: ""
	I1028 12:58:36.515352  129528 logs.go:282] 0 containers: []
	W1028 12:58:36.515360  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:36.515369  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:36.515382  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:36.592036  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:36.592075  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:36.636494  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:36.636539  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:36.686285  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:36.686318  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:36.699451  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:36.699481  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:36.769918  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:39.270082  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:39.282368  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:39.282428  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:39.322713  129528 cri.go:89] found id: ""
	I1028 12:58:39.322741  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.322749  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:39.322755  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:39.322805  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:39.352890  129528 cri.go:89] found id: ""
	I1028 12:58:39.352922  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.352933  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:39.352941  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:39.353007  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:39.383192  129528 cri.go:89] found id: ""
	I1028 12:58:39.383221  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.383232  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:39.383243  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:39.383293  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:39.414068  129528 cri.go:89] found id: ""
	I1028 12:58:39.414098  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.414109  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:39.414118  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:39.414190  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:39.445993  129528 cri.go:89] found id: ""
	I1028 12:58:39.446024  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.446035  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:39.446044  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:39.446100  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:39.476660  129528 cri.go:89] found id: ""
	I1028 12:58:39.476691  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.476703  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:39.476710  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:39.476769  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:39.507327  129528 cri.go:89] found id: ""
	I1028 12:58:39.507364  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.507376  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:39.507385  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:39.507446  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:39.537433  129528 cri.go:89] found id: ""
	I1028 12:58:39.537464  129528 logs.go:282] 0 containers: []
	W1028 12:58:39.537474  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:39.537486  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:39.537503  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:39.583884  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:39.583930  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:39.596707  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:39.596744  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:39.665559  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:39.665586  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:39.665603  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:39.740745  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:39.740783  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:42.275215  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:42.287578  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:42.287652  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:42.320134  129528 cri.go:89] found id: ""
	I1028 12:58:42.320164  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.320173  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:42.320181  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:42.320242  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:42.353795  129528 cri.go:89] found id: ""
	I1028 12:58:42.353829  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.353841  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:42.353849  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:42.353919  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:42.384549  129528 cri.go:89] found id: ""
	I1028 12:58:42.384582  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.384593  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:42.384601  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:42.384669  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:42.417478  129528 cri.go:89] found id: ""
	I1028 12:58:42.417506  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.417514  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:42.417521  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:42.417581  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:42.448527  129528 cri.go:89] found id: ""
	I1028 12:58:42.448566  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.448574  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:42.448581  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:42.448634  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:42.480432  129528 cri.go:89] found id: ""
	I1028 12:58:42.480464  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.480475  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:42.480484  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:42.480571  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:42.516206  129528 cri.go:89] found id: ""
	I1028 12:58:42.516236  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.516251  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:42.516258  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:42.516319  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:42.550999  129528 cri.go:89] found id: ""
	I1028 12:58:42.551029  129528 logs.go:282] 0 containers: []
	W1028 12:58:42.551038  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:42.551049  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:42.551069  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:42.616043  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:42.616068  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:42.616082  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:42.701422  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:42.701460  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:42.737946  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:42.737979  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:42.784453  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:42.784488  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:45.298231  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:45.310013  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:45.310077  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:45.340835  129528 cri.go:89] found id: ""
	I1028 12:58:45.340860  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.340868  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:45.340875  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:45.340929  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:45.371700  129528 cri.go:89] found id: ""
	I1028 12:58:45.371739  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.371752  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:45.371762  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:45.371841  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:45.405001  129528 cri.go:89] found id: ""
	I1028 12:58:45.405033  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.405043  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:45.405052  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:45.405112  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:45.436503  129528 cri.go:89] found id: ""
	I1028 12:58:45.436529  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.436538  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:45.436545  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:45.436608  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:45.466765  129528 cri.go:89] found id: ""
	I1028 12:58:45.466793  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.466804  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:45.466813  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:45.466884  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:45.498197  129528 cri.go:89] found id: ""
	I1028 12:58:45.498225  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.498235  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:45.498244  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:45.498302  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:45.528783  129528 cri.go:89] found id: ""
	I1028 12:58:45.528809  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.528819  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:45.528829  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:45.528885  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:45.558755  129528 cri.go:89] found id: ""
	I1028 12:58:45.558781  129528 logs.go:282] 0 containers: []
	W1028 12:58:45.558791  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:45.558803  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:45.558831  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:45.591850  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:45.591889  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:45.642190  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:45.642232  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:45.655769  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:45.655805  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:45.719951  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:45.719988  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:45.720008  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:48.294608  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:48.307616  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:48.307691  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:48.340268  129528 cri.go:89] found id: ""
	I1028 12:58:48.340294  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.340302  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:48.340308  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:48.340359  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:48.368840  129528 cri.go:89] found id: ""
	I1028 12:58:48.368869  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.368876  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:48.368882  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:48.368930  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:48.401109  129528 cri.go:89] found id: ""
	I1028 12:58:48.401135  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.401143  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:48.401150  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:48.401210  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:48.433156  129528 cri.go:89] found id: ""
	I1028 12:58:48.433193  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.433205  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:48.433213  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:48.433286  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:48.463355  129528 cri.go:89] found id: ""
	I1028 12:58:48.463384  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.463392  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:48.463400  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:48.463463  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:48.494356  129528 cri.go:89] found id: ""
	I1028 12:58:48.494383  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.494391  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:48.494399  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:48.494461  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:48.525300  129528 cri.go:89] found id: ""
	I1028 12:58:48.525330  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.525338  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:48.525349  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:48.525412  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:48.557096  129528 cri.go:89] found id: ""
	I1028 12:58:48.557127  129528 logs.go:282] 0 containers: []
	W1028 12:58:48.557136  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:48.557147  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:48.557164  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:48.593589  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:48.593624  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:48.645413  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:48.645449  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:48.658919  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:48.658963  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:48.725270  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:48.725292  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:48.725305  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:51.304272  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:51.318104  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:51.318179  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:51.353476  129528 cri.go:89] found id: ""
	I1028 12:58:51.353510  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.353523  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:51.353531  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:51.353597  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:51.390233  129528 cri.go:89] found id: ""
	I1028 12:58:51.390260  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.390268  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:51.390275  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:51.390333  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:51.425153  129528 cri.go:89] found id: ""
	I1028 12:58:51.425185  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.425196  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:51.425204  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:51.425285  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:51.457050  129528 cri.go:89] found id: ""
	I1028 12:58:51.457077  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.457086  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:51.457092  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:51.457145  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:51.490871  129528 cri.go:89] found id: ""
	I1028 12:58:51.490911  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.490924  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:51.490932  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:51.490992  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:51.524279  129528 cri.go:89] found id: ""
	I1028 12:58:51.524305  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.524314  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:51.524321  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:51.524375  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:51.557508  129528 cri.go:89] found id: ""
	I1028 12:58:51.557538  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.557550  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:51.557557  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:51.557613  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:51.591142  129528 cri.go:89] found id: ""
	I1028 12:58:51.591178  129528 logs.go:282] 0 containers: []
	W1028 12:58:51.591186  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:51.591196  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:51.591210  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:51.640860  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:51.640901  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:51.653652  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:51.653684  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:51.725571  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:51.725598  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:51.725615  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:51.805309  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:51.805357  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:54.347082  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:54.359549  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:54.359620  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:54.392719  129528 cri.go:89] found id: ""
	I1028 12:58:54.392748  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.392758  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:54.392765  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:54.392831  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:54.426682  129528 cri.go:89] found id: ""
	I1028 12:58:54.426715  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.426726  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:54.426734  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:54.426802  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:54.458355  129528 cri.go:89] found id: ""
	I1028 12:58:54.458387  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.458399  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:54.458414  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:54.458481  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:54.489695  129528 cri.go:89] found id: ""
	I1028 12:58:54.489741  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.489753  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:54.489761  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:54.489831  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:54.520368  129528 cri.go:89] found id: ""
	I1028 12:58:54.520398  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.520410  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:54.520420  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:54.520483  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:54.551367  129528 cri.go:89] found id: ""
	I1028 12:58:54.551394  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.551402  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:54.551409  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:54.551460  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:54.584498  129528 cri.go:89] found id: ""
	I1028 12:58:54.584531  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.584540  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:54.584546  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:54.584610  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:54.620288  129528 cri.go:89] found id: ""
	I1028 12:58:54.620322  129528 logs.go:282] 0 containers: []
	W1028 12:58:54.620333  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:54.620351  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:54.620367  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:54.654493  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:54.654528  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:54.705146  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:54.705179  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:54.717407  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:54.717436  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:54.782858  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:58:54.782882  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:54.782895  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:57.359156  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:58:57.371448  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:58:57.371513  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:58:57.404089  129528 cri.go:89] found id: ""
	I1028 12:58:57.404114  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.404127  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:58:57.404134  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:58:57.404180  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:58:57.433734  129528 cri.go:89] found id: ""
	I1028 12:58:57.433767  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.433778  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:58:57.433787  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:58:57.433860  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:58:57.463516  129528 cri.go:89] found id: ""
	I1028 12:58:57.463549  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.463611  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:58:57.463648  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:58:57.463713  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:58:57.497654  129528 cri.go:89] found id: ""
	I1028 12:58:57.497684  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.497695  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:58:57.497704  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:58:57.497772  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:58:57.533113  129528 cri.go:89] found id: ""
	I1028 12:58:57.533144  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.533180  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:58:57.533188  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:58:57.533256  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:58:57.563568  129528 cri.go:89] found id: ""
	I1028 12:58:57.563593  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.563602  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:58:57.563608  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:58:57.563686  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:58:57.593885  129528 cri.go:89] found id: ""
	I1028 12:58:57.593921  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.593932  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:58:57.593942  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:58:57.594008  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:58:57.629033  129528 cri.go:89] found id: ""
	I1028 12:58:57.629064  129528 logs.go:282] 0 containers: []
	W1028 12:58:57.629073  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:58:57.629082  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:58:57.629095  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:58:57.708498  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:58:57.708581  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:58:57.752435  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:58:57.752477  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:58:57.805044  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:58:57.805081  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:58:57.817871  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:58:57.817905  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:58:57.881056  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:00.381282  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:00.394195  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:00.394259  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:00.425923  129528 cri.go:89] found id: ""
	I1028 12:59:00.425952  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.425962  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:00.425975  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:00.426036  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:00.455934  129528 cri.go:89] found id: ""
	I1028 12:59:00.455964  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.455974  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:00.455981  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:00.456033  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:00.486731  129528 cri.go:89] found id: ""
	I1028 12:59:00.486758  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.486765  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:00.486772  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:00.486823  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:00.520054  129528 cri.go:89] found id: ""
	I1028 12:59:00.520079  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.520087  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:00.520094  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:00.520144  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:00.550268  129528 cri.go:89] found id: ""
	I1028 12:59:00.550301  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.550312  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:00.550323  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:00.550419  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:00.580050  129528 cri.go:89] found id: ""
	I1028 12:59:00.580083  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.580093  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:00.580101  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:00.580160  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:00.621105  129528 cri.go:89] found id: ""
	I1028 12:59:00.621141  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.621153  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:00.621161  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:00.621226  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:00.653050  129528 cri.go:89] found id: ""
	I1028 12:59:00.653081  129528 logs.go:282] 0 containers: []
	W1028 12:59:00.653088  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:00.653099  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:00.653111  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:00.704947  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:00.704978  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:00.718093  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:00.718118  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:00.786415  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:00.786447  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:00.786465  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:00.864428  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:00.864466  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:03.402643  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:03.414755  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:03.414842  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:03.447888  129528 cri.go:89] found id: ""
	I1028 12:59:03.447920  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.447936  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:03.447945  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:03.448012  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:03.480724  129528 cri.go:89] found id: ""
	I1028 12:59:03.480750  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.480757  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:03.480763  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:03.480813  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:03.512281  129528 cri.go:89] found id: ""
	I1028 12:59:03.512320  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.512333  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:03.512343  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:03.512414  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:03.543673  129528 cri.go:89] found id: ""
	I1028 12:59:03.543705  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.543717  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:03.543727  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:03.543804  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:03.574245  129528 cri.go:89] found id: ""
	I1028 12:59:03.574277  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.574286  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:03.574292  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:03.574356  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:03.607483  129528 cri.go:89] found id: ""
	I1028 12:59:03.607519  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.607533  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:03.607542  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:03.607614  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:03.638756  129528 cri.go:89] found id: ""
	I1028 12:59:03.638784  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.638793  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:03.638801  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:03.638866  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:03.669283  129528 cri.go:89] found id: ""
	I1028 12:59:03.669313  129528 logs.go:282] 0 containers: []
	W1028 12:59:03.669323  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:03.669333  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:03.669346  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:03.720268  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:03.720306  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:03.735103  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:03.735130  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:03.807298  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:03.807321  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:03.807342  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:03.883386  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:03.883423  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:06.420026  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:06.433442  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:06.433508  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:06.466778  129528 cri.go:89] found id: ""
	I1028 12:59:06.466811  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.466823  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:06.466831  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:06.466896  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:06.499540  129528 cri.go:89] found id: ""
	I1028 12:59:06.499571  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.499580  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:06.499586  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:06.499673  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:06.531678  129528 cri.go:89] found id: ""
	I1028 12:59:06.531716  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.531728  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:06.531737  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:06.531801  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:06.562983  129528 cri.go:89] found id: ""
	I1028 12:59:06.563013  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.563024  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:06.563033  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:06.563095  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:06.597604  129528 cri.go:89] found id: ""
	I1028 12:59:06.597637  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.597653  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:06.597663  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:06.597732  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:06.631437  129528 cri.go:89] found id: ""
	I1028 12:59:06.631470  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.631484  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:06.631494  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:06.631588  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:06.667701  129528 cri.go:89] found id: ""
	I1028 12:59:06.667730  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.667740  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:06.667747  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:06.667819  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:06.717566  129528 cri.go:89] found id: ""
	I1028 12:59:06.717598  129528 logs.go:282] 0 containers: []
	W1028 12:59:06.717607  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:06.717616  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:06.717628  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:06.767715  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:06.767756  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:06.780247  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:06.780274  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:06.847140  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:06.847163  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:06.847176  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:06.925009  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:06.925057  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:09.464361  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:09.476387  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:09.476470  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:09.507138  129528 cri.go:89] found id: ""
	I1028 12:59:09.507172  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.507184  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:09.507191  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:09.507252  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:09.538059  129528 cri.go:89] found id: ""
	I1028 12:59:09.538092  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.538104  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:09.538112  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:09.538165  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:09.567311  129528 cri.go:89] found id: ""
	I1028 12:59:09.567340  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.567351  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:09.567359  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:09.567425  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:09.599225  129528 cri.go:89] found id: ""
	I1028 12:59:09.599252  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.599269  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:09.599276  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:09.599355  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:09.630412  129528 cri.go:89] found id: ""
	I1028 12:59:09.630443  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.630455  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:09.630464  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:09.630534  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:09.663614  129528 cri.go:89] found id: ""
	I1028 12:59:09.663646  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.663656  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:09.663665  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:09.663723  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:09.700438  129528 cri.go:89] found id: ""
	I1028 12:59:09.700462  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.700469  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:09.700475  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:09.700538  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:09.734911  129528 cri.go:89] found id: ""
	I1028 12:59:09.734941  129528 logs.go:282] 0 containers: []
	W1028 12:59:09.734952  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:09.734964  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:09.734981  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:09.783689  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:09.783725  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:09.796297  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:09.796325  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:09.860255  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:09.860290  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:09.860308  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:09.935306  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:09.935351  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
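
The pass above is one complete iteration of minikube's control-plane readiness poll: a pgrep for a running kube-apiserver process, one `crictl ps -a --quiet --name=<component>` query per expected component, and a log-gathering round (kubelet, dmesg, describe nodes, CRI-O, container status) when every query comes back empty. Below is a minimal shell sketch of the same container checks, assuming shell access to the node (for example via `minikube ssh`); the component list and the crictl/pgrep invocations are copied from the log lines, while the loop and the echo output are illustrative only.

    # hypothetical manual reproduction of the per-component checks logged above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # an empty result corresponds to the 'found id: ""' / '0 containers: []' lines
      echo "$name: ${ids:-<none>}"
    done
    # the process probe that opens each polling cycle
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

On a healthy node each control-plane name returns at least one container ID; here every query comes back empty, so the poll repeats on roughly a three-second interval (12:59:09, 12:59:12, 12:59:15, ...).
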
	I1028 12:59:12.471996  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:12.483734  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:12.483809  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:12.514761  129528 cri.go:89] found id: ""
	I1028 12:59:12.514793  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.514805  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:12.514812  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:12.514863  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:12.544752  129528 cri.go:89] found id: ""
	I1028 12:59:12.544779  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.544790  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:12.544800  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:12.544877  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:12.575827  129528 cri.go:89] found id: ""
	I1028 12:59:12.575858  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.575868  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:12.575876  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:12.575947  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:12.610738  129528 cri.go:89] found id: ""
	I1028 12:59:12.610765  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.610775  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:12.610784  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:12.610853  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:12.642198  129528 cri.go:89] found id: ""
	I1028 12:59:12.642223  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.642232  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:12.642239  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:12.642293  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:12.674882  129528 cri.go:89] found id: ""
	I1028 12:59:12.674914  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.674926  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:12.674935  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:12.675003  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:12.708939  129528 cri.go:89] found id: ""
	I1028 12:59:12.708968  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.708976  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:12.708983  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:12.709034  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:12.740620  129528 cri.go:89] found id: ""
	I1028 12:59:12.740644  129528 logs.go:282] 0 containers: []
	W1028 12:59:12.740652  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:12.740661  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:12.740672  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:12.815372  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:12.815411  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:12.851893  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:12.851935  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:12.901818  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:12.901862  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:12.915062  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:12.915099  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:12.978545  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
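
Each describe-nodes attempt in these cycles fails the same way: the v1.20.0 kubectl targets localhost:8443 and the connection is refused, which is consistent with the empty kube-apiserver queries above (no container, so nothing is listening on that port). A hypothetical follow-up check from inside the node, not part of the test run, that separates "no listener on 8443" from "listener present but rejecting":

    # is anything bound to the apiserver port named in the error message?
    sudo ss -lntp | grep ':8443' || echo "nothing listening on 8443"
    # with a listener present this would return an HTTP status rather than
    # 'connection refused'
    curl -sk https://localhost:8443/healthz || true
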
	I1028 12:59:15.479613  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:15.493033  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:15.493126  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:15.528683  129528 cri.go:89] found id: ""
	I1028 12:59:15.528714  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.528722  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:15.528729  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:15.528787  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:15.564633  129528 cri.go:89] found id: ""
	I1028 12:59:15.564658  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.564666  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:15.564673  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:15.564729  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:15.597911  129528 cri.go:89] found id: ""
	I1028 12:59:15.597946  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.597957  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:15.597972  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:15.598028  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:15.629355  129528 cri.go:89] found id: ""
	I1028 12:59:15.629381  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.629391  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:15.629399  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:15.629462  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:15.658743  129528 cri.go:89] found id: ""
	I1028 12:59:15.658772  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.658790  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:15.658799  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:15.658872  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:15.689280  129528 cri.go:89] found id: ""
	I1028 12:59:15.689311  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.689323  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:15.689332  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:15.689394  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:15.719658  129528 cri.go:89] found id: ""
	I1028 12:59:15.719686  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.719696  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:15.719705  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:15.719771  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:15.752567  129528 cri.go:89] found id: ""
	I1028 12:59:15.752602  129528 logs.go:282] 0 containers: []
	W1028 12:59:15.752613  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:15.752625  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:15.752642  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:15.812940  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:15.812964  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:15.812984  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:15.888106  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:15.888145  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:15.924376  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:15.924406  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:15.973596  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:15.973634  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:18.487094  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:18.501079  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:18.501152  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:18.538214  129528 cri.go:89] found id: ""
	I1028 12:59:18.538247  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.538256  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:18.538262  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:18.538315  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:18.568477  129528 cri.go:89] found id: ""
	I1028 12:59:18.568506  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.568519  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:18.568527  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:18.568592  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:18.599372  129528 cri.go:89] found id: ""
	I1028 12:59:18.599397  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.599406  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:18.599412  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:18.599472  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:18.629566  129528 cri.go:89] found id: ""
	I1028 12:59:18.629596  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.629605  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:18.629611  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:18.629664  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:18.661818  129528 cri.go:89] found id: ""
	I1028 12:59:18.661844  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.661852  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:18.661858  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:18.661917  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:18.692857  129528 cri.go:89] found id: ""
	I1028 12:59:18.692885  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.692893  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:18.692903  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:18.692973  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:18.725633  129528 cri.go:89] found id: ""
	I1028 12:59:18.725658  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.725666  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:18.725673  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:18.725723  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:18.756226  129528 cri.go:89] found id: ""
	I1028 12:59:18.756254  129528 logs.go:282] 0 containers: []
	W1028 12:59:18.756263  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:18.756273  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:18.756286  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:18.771177  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:18.771210  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:18.842969  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:18.842999  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:18.843017  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:18.940703  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:18.940744  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:18.976160  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:18.976193  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:21.527446  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:21.542423  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:21.542487  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:21.580847  129528 cri.go:89] found id: ""
	I1028 12:59:21.580894  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.580909  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:21.580919  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:21.580988  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:21.618623  129528 cri.go:89] found id: ""
	I1028 12:59:21.618656  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.618665  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:21.618677  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:21.618729  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:21.657982  129528 cri.go:89] found id: ""
	I1028 12:59:21.658012  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.658024  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:21.658031  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:21.658097  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:21.691274  129528 cri.go:89] found id: ""
	I1028 12:59:21.691303  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.691313  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:21.691325  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:21.691392  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:21.722787  129528 cri.go:89] found id: ""
	I1028 12:59:21.722815  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.722825  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:21.722834  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:21.722903  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:21.753902  129528 cri.go:89] found id: ""
	I1028 12:59:21.753938  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.753949  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:21.753958  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:21.754035  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:21.784086  129528 cri.go:89] found id: ""
	I1028 12:59:21.784119  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.784137  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:21.784146  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:21.784202  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:21.814525  129528 cri.go:89] found id: ""
	I1028 12:59:21.814552  129528 logs.go:282] 0 containers: []
	W1028 12:59:21.814563  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:21.814576  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:21.814591  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:21.847569  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:21.847606  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:21.900060  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:21.900101  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:21.913944  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:21.913990  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:21.986210  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:21.986240  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:21.986257  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:24.561266  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:24.577236  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:24.577305  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:24.618194  129528 cri.go:89] found id: ""
	I1028 12:59:24.618234  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.618247  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:24.618258  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:24.618332  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:24.670795  129528 cri.go:89] found id: ""
	I1028 12:59:24.670830  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.670837  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:24.670844  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:24.670896  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:24.711373  129528 cri.go:89] found id: ""
	I1028 12:59:24.711408  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.711419  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:24.711427  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:24.711497  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:24.743395  129528 cri.go:89] found id: ""
	I1028 12:59:24.743427  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.743437  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:24.743444  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:24.743515  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:24.772809  129528 cri.go:89] found id: ""
	I1028 12:59:24.772837  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.772845  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:24.772852  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:24.772902  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:24.807849  129528 cri.go:89] found id: ""
	I1028 12:59:24.807880  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.807890  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:24.807897  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:24.807945  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:24.840464  129528 cri.go:89] found id: ""
	I1028 12:59:24.840493  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.840500  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:24.840506  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:24.840561  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:24.871809  129528 cri.go:89] found id: ""
	I1028 12:59:24.871837  129528 logs.go:282] 0 containers: []
	W1028 12:59:24.871845  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:24.871854  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:24.871865  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:24.923730  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:24.923772  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:24.936167  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:24.936196  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:25.006578  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:25.006606  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:25.006621  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:25.081001  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:25.081039  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:27.621984  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:27.635040  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:27.635119  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:27.667543  129528 cri.go:89] found id: ""
	I1028 12:59:27.667572  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.667584  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:27.667594  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:27.667671  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:27.700477  129528 cri.go:89] found id: ""
	I1028 12:59:27.700514  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.700525  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:27.700533  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:27.700590  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:27.730015  129528 cri.go:89] found id: ""
	I1028 12:59:27.730040  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.730048  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:27.730054  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:27.730102  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:27.761346  129528 cri.go:89] found id: ""
	I1028 12:59:27.761371  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.761379  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:27.761387  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:27.761451  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:27.790976  129528 cri.go:89] found id: ""
	I1028 12:59:27.790999  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.791011  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:27.791017  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:27.791065  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:27.820427  129528 cri.go:89] found id: ""
	I1028 12:59:27.820461  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.820473  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:27.820481  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:27.820544  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:27.851078  129528 cri.go:89] found id: ""
	I1028 12:59:27.851102  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.851110  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:27.851116  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:27.851170  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:27.880333  129528 cri.go:89] found id: ""
	I1028 12:59:27.880368  129528 logs.go:282] 0 containers: []
	W1028 12:59:27.880378  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:27.880390  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:27.880405  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:27.931536  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:27.931572  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:27.943586  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:27.943614  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:28.010302  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:28.010330  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:28.010345  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:28.090112  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:28.090148  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:30.631465  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:30.646234  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:30.646307  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:30.678807  129528 cri.go:89] found id: ""
	I1028 12:59:30.678842  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.678854  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:30.678862  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:30.678921  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:30.711619  129528 cri.go:89] found id: ""
	I1028 12:59:30.711657  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.711667  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:30.711685  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:30.711749  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:30.743915  129528 cri.go:89] found id: ""
	I1028 12:59:30.744021  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.744046  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:30.744057  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:30.744131  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:30.777942  129528 cri.go:89] found id: ""
	I1028 12:59:30.777970  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.777979  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:30.777986  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:30.778045  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:30.811332  129528 cri.go:89] found id: ""
	I1028 12:59:30.811362  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.811373  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:30.811381  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:30.811451  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:30.845219  129528 cri.go:89] found id: ""
	I1028 12:59:30.845246  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.845253  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:30.845259  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:30.845323  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:30.875437  129528 cri.go:89] found id: ""
	I1028 12:59:30.875469  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.875477  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:30.875485  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:30.875575  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:30.905455  129528 cri.go:89] found id: ""
	I1028 12:59:30.905479  129528 logs.go:282] 0 containers: []
	W1028 12:59:30.905487  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:30.905495  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:30.905511  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:30.953627  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:30.953666  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:30.965872  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:30.965896  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:31.031081  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:31.031103  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:31.031114  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:31.108067  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:31.108103  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:33.643080  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:33.656920  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:33.657001  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:33.689502  129528 cri.go:89] found id: ""
	I1028 12:59:33.689554  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.689568  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:33.689579  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:33.689647  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:33.721470  129528 cri.go:89] found id: ""
	I1028 12:59:33.721497  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.721505  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:33.721512  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:33.721561  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:33.753417  129528 cri.go:89] found id: ""
	I1028 12:59:33.753450  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.753461  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:33.753469  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:33.753523  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:33.784463  129528 cri.go:89] found id: ""
	I1028 12:59:33.784489  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.784496  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:33.784504  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:33.784554  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:33.816162  129528 cri.go:89] found id: ""
	I1028 12:59:33.816192  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.816203  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:33.816212  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:33.816270  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:33.848881  129528 cri.go:89] found id: ""
	I1028 12:59:33.848910  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.848918  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:33.848925  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:33.848991  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:33.881596  129528 cri.go:89] found id: ""
	I1028 12:59:33.881630  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.881640  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:33.881649  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:33.881719  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:33.911553  129528 cri.go:89] found id: ""
	I1028 12:59:33.911591  129528 logs.go:282] 0 containers: []
	W1028 12:59:33.911604  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:33.911618  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:33.911653  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:33.960929  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:33.960959  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:33.974992  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:33.975021  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:34.043437  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:34.043463  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:34.043483  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:34.118504  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:34.118538  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:36.656276  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:36.669222  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:36.669285  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:36.703147  129528 cri.go:89] found id: ""
	I1028 12:59:36.703183  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.703194  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:36.703203  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:36.703257  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:36.733732  129528 cri.go:89] found id: ""
	I1028 12:59:36.733770  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.733782  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:36.733793  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:36.733868  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:36.771464  129528 cri.go:89] found id: ""
	I1028 12:59:36.771494  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.771503  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:36.771510  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:36.771577  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:36.807928  129528 cri.go:89] found id: ""
	I1028 12:59:36.807957  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.807967  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:36.807976  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:36.808047  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:36.841016  129528 cri.go:89] found id: ""
	I1028 12:59:36.841040  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.841048  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:36.841054  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:36.841107  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:36.878124  129528 cri.go:89] found id: ""
	I1028 12:59:36.878154  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.878162  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:36.878168  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:36.878233  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:36.909462  129528 cri.go:89] found id: ""
	I1028 12:59:36.909491  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.909503  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:36.909511  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:36.909594  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:36.945294  129528 cri.go:89] found id: ""
	I1028 12:59:36.945320  129528 logs.go:282] 0 containers: []
	W1028 12:59:36.945328  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:36.945338  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:36.945353  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:36.984715  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:36.984757  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:37.035369  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:37.035407  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:37.049075  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:37.049100  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:37.113287  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:37.113311  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:37.113327  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:39.693071  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:39.706640  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:39.706722  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:39.741369  129528 cri.go:89] found id: ""
	I1028 12:59:39.741402  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.741414  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:39.741422  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:39.741492  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:39.772707  129528 cri.go:89] found id: ""
	I1028 12:59:39.772744  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.772754  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:39.772764  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:39.772831  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:39.803421  129528 cri.go:89] found id: ""
	I1028 12:59:39.803456  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.803467  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:39.803477  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:39.803575  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:39.835142  129528 cri.go:89] found id: ""
	I1028 12:59:39.835176  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.835188  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:39.835196  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:39.835259  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:39.868634  129528 cri.go:89] found id: ""
	I1028 12:59:39.868668  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.868680  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:39.868689  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:39.868761  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:39.905353  129528 cri.go:89] found id: ""
	I1028 12:59:39.905382  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.905392  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:39.905401  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:39.905459  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:39.940913  129528 cri.go:89] found id: ""
	I1028 12:59:39.940943  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.940952  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:39.940958  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:39.941014  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:39.975378  129528 cri.go:89] found id: ""
	I1028 12:59:39.975404  129528 logs.go:282] 0 containers: []
	W1028 12:59:39.975415  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:39.975425  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:39.975438  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:40.049742  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:40.049768  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:40.049782  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:40.128127  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:40.128168  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:40.164957  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:40.164987  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:40.214604  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:40.214644  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:42.728348  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:42.741169  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:42.741260  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:42.772360  129528 cri.go:89] found id: ""
	I1028 12:59:42.772391  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.772401  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:42.772411  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:42.772476  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:42.806539  129528 cri.go:89] found id: ""
	I1028 12:59:42.806576  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.806586  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:42.806593  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:42.806661  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:42.840609  129528 cri.go:89] found id: ""
	I1028 12:59:42.840641  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.840654  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:42.840663  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:42.840727  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:42.874264  129528 cri.go:89] found id: ""
	I1028 12:59:42.874292  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.874304  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:42.874310  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:42.874374  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:42.905437  129528 cri.go:89] found id: ""
	I1028 12:59:42.905470  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.905482  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:42.905491  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:42.905560  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:42.937257  129528 cri.go:89] found id: ""
	I1028 12:59:42.937296  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.937311  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:42.937322  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:42.937388  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:42.967253  129528 cri.go:89] found id: ""
	I1028 12:59:42.967285  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.967297  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:42.967313  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:42.967378  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:42.999092  129528 cri.go:89] found id: ""
	I1028 12:59:42.999135  129528 logs.go:282] 0 containers: []
	W1028 12:59:42.999152  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:42.999165  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:42.999181  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:43.047032  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:43.047073  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:43.059429  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:43.059457  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:43.120981  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:43.121011  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:43.121025  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:43.199809  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:43.199845  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:45.735581  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:45.748493  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:45.748569  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:45.781413  129528 cri.go:89] found id: ""
	I1028 12:59:45.781447  129528 logs.go:282] 0 containers: []
	W1028 12:59:45.781459  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:45.781467  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:45.781541  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:45.813843  129528 cri.go:89] found id: ""
	I1028 12:59:45.813873  129528 logs.go:282] 0 containers: []
	W1028 12:59:45.813884  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:45.813892  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:45.813955  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:45.846055  129528 cri.go:89] found id: ""
	I1028 12:59:45.846093  129528 logs.go:282] 0 containers: []
	W1028 12:59:45.846105  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:45.846115  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:45.846184  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:45.876671  129528 cri.go:89] found id: ""
	I1028 12:59:45.876706  129528 logs.go:282] 0 containers: []
	W1028 12:59:45.876719  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:45.876727  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:45.876794  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:45.912928  129528 cri.go:89] found id: ""
	I1028 12:59:45.912958  129528 logs.go:282] 0 containers: []
	W1028 12:59:45.912969  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:45.912977  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:45.913044  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:45.944171  129528 cri.go:89] found id: ""
	I1028 12:59:45.944201  129528 logs.go:282] 0 containers: []
	W1028 12:59:45.944214  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:45.944223  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:45.944290  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:45.974952  129528 cri.go:89] found id: ""
	I1028 12:59:45.974982  129528 logs.go:282] 0 containers: []
	W1028 12:59:45.974994  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:45.975003  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:45.975064  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:46.010682  129528 cri.go:89] found id: ""
	I1028 12:59:46.010710  129528 logs.go:282] 0 containers: []
	W1028 12:59:46.010720  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:46.010733  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:46.010751  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:46.051883  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:46.051922  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:46.101302  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:46.101342  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:46.113374  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:46.113409  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:46.183256  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:46.183287  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:46.183306  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:48.764509  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:48.777048  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:48.777109  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:48.809016  129528 cri.go:89] found id: ""
	I1028 12:59:48.809041  129528 logs.go:282] 0 containers: []
	W1028 12:59:48.809052  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:48.809060  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:48.809122  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:48.840138  129528 cri.go:89] found id: ""
	I1028 12:59:48.840169  129528 logs.go:282] 0 containers: []
	W1028 12:59:48.840178  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:48.840186  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:48.840241  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:48.877506  129528 cri.go:89] found id: ""
	I1028 12:59:48.877532  129528 logs.go:282] 0 containers: []
	W1028 12:59:48.877540  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:48.877547  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:48.877624  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:48.910432  129528 cri.go:89] found id: ""
	I1028 12:59:48.910457  129528 logs.go:282] 0 containers: []
	W1028 12:59:48.910466  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:48.910472  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:48.910531  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:48.940220  129528 cri.go:89] found id: ""
	I1028 12:59:48.940248  129528 logs.go:282] 0 containers: []
	W1028 12:59:48.940257  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:48.940264  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:48.940325  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:48.974835  129528 cri.go:89] found id: ""
	I1028 12:59:48.974860  129528 logs.go:282] 0 containers: []
	W1028 12:59:48.974867  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:48.974873  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:48.974936  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:49.009665  129528 cri.go:89] found id: ""
	I1028 12:59:49.009695  129528 logs.go:282] 0 containers: []
	W1028 12:59:49.009703  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:49.009710  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:49.009760  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:49.038120  129528 cri.go:89] found id: ""
	I1028 12:59:49.038145  129528 logs.go:282] 0 containers: []
	W1028 12:59:49.038153  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:49.038163  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:49.038178  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:49.075870  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:49.075897  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:49.126276  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:49.126310  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:49.138545  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:49.138567  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:49.200557  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:49.200581  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:49.200593  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:51.778860  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:51.791694  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:51.791775  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:51.828336  129528 cri.go:89] found id: ""
	I1028 12:59:51.828366  129528 logs.go:282] 0 containers: []
	W1028 12:59:51.828376  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:51.828385  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:51.828459  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:51.859480  129528 cri.go:89] found id: ""
	I1028 12:59:51.859517  129528 logs.go:282] 0 containers: []
	W1028 12:59:51.859529  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:51.859538  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:51.859602  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:51.889508  129528 cri.go:89] found id: ""
	I1028 12:59:51.889545  129528 logs.go:282] 0 containers: []
	W1028 12:59:51.889577  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:51.889584  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:51.889637  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:51.921874  129528 cri.go:89] found id: ""
	I1028 12:59:51.921909  129528 logs.go:282] 0 containers: []
	W1028 12:59:51.921921  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:51.921928  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:51.921989  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:51.954935  129528 cri.go:89] found id: ""
	I1028 12:59:51.954968  129528 logs.go:282] 0 containers: []
	W1028 12:59:51.954980  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:51.954989  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:51.955057  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:51.993347  129528 cri.go:89] found id: ""
	I1028 12:59:51.993382  129528 logs.go:282] 0 containers: []
	W1028 12:59:51.993394  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:51.993404  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:51.993472  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:52.026363  129528 cri.go:89] found id: ""
	I1028 12:59:52.026390  129528 logs.go:282] 0 containers: []
	W1028 12:59:52.026398  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:52.026404  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:52.026457  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:52.063183  129528 cri.go:89] found id: ""
	I1028 12:59:52.063221  129528 logs.go:282] 0 containers: []
	W1028 12:59:52.063235  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:52.063250  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:52.063266  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:52.113308  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:52.113347  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:52.126650  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:52.126684  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:52.196424  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:52.196452  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:52.196469  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:52.276189  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:52.276236  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:54.818002  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:54.830898  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:54.830956  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:54.864312  129528 cri.go:89] found id: ""
	I1028 12:59:54.864340  129528 logs.go:282] 0 containers: []
	W1028 12:59:54.864348  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:54.864355  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:54.864422  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:54.897891  129528 cri.go:89] found id: ""
	I1028 12:59:54.897922  129528 logs.go:282] 0 containers: []
	W1028 12:59:54.897934  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:54.897943  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:54.898007  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:54.931905  129528 cri.go:89] found id: ""
	I1028 12:59:54.931939  129528 logs.go:282] 0 containers: []
	W1028 12:59:54.931949  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:54.931958  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:54.932023  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:54.962899  129528 cri.go:89] found id: ""
	I1028 12:59:54.962924  129528 logs.go:282] 0 containers: []
	W1028 12:59:54.962935  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:54.962944  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:54.963003  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:54.994625  129528 cri.go:89] found id: ""
	I1028 12:59:54.994659  129528 logs.go:282] 0 containers: []
	W1028 12:59:54.994674  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:54.994682  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:54.994747  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:55.024630  129528 cri.go:89] found id: ""
	I1028 12:59:55.024663  129528 logs.go:282] 0 containers: []
	W1028 12:59:55.024674  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:55.024683  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:55.024746  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:55.056244  129528 cri.go:89] found id: ""
	I1028 12:59:55.056271  129528 logs.go:282] 0 containers: []
	W1028 12:59:55.056279  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:55.056286  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:55.056338  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:55.100185  129528 cri.go:89] found id: ""
	I1028 12:59:55.100222  129528 logs.go:282] 0 containers: []
	W1028 12:59:55.100234  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:55.100246  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:55.100262  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:55.150051  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:55.150089  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:55.164422  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:55.164449  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:55.236446  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:59:55.236474  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:55.236492  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:55.312828  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:55.312866  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:57.847038  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:59:57.860849  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:59:57.860919  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:59:57.899158  129528 cri.go:89] found id: ""
	I1028 12:59:57.899190  129528 logs.go:282] 0 containers: []
	W1028 12:59:57.899201  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:59:57.899210  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:59:57.899272  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:59:57.932307  129528 cri.go:89] found id: ""
	I1028 12:59:57.932338  129528 logs.go:282] 0 containers: []
	W1028 12:59:57.932349  129528 logs.go:284] No container was found matching "etcd"
	I1028 12:59:57.932358  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:59:57.932424  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:59:57.965626  129528 cri.go:89] found id: ""
	I1028 12:59:57.965659  129528 logs.go:282] 0 containers: []
	W1028 12:59:57.965670  129528 logs.go:284] No container was found matching "coredns"
	I1028 12:59:57.965679  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:59:57.965752  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:59:57.994838  129528 cri.go:89] found id: ""
	I1028 12:59:57.994862  129528 logs.go:282] 0 containers: []
	W1028 12:59:57.994870  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:59:57.994876  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:59:57.994929  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:59:58.024648  129528 cri.go:89] found id: ""
	I1028 12:59:58.024683  129528 logs.go:282] 0 containers: []
	W1028 12:59:58.024695  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:59:58.024705  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:59:58.024787  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:59:58.060118  129528 cri.go:89] found id: ""
	I1028 12:59:58.060143  129528 logs.go:282] 0 containers: []
	W1028 12:59:58.060151  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:59:58.060157  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:59:58.060212  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:59:58.092720  129528 cri.go:89] found id: ""
	I1028 12:59:58.092747  129528 logs.go:282] 0 containers: []
	W1028 12:59:58.092758  129528 logs.go:284] No container was found matching "kindnet"
	I1028 12:59:58.092766  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:59:58.092842  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:59:58.124117  129528 cri.go:89] found id: ""
	I1028 12:59:58.124152  129528 logs.go:282] 0 containers: []
	W1028 12:59:58.124166  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:59:58.124180  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:59:58.124197  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:59:58.203443  129528 logs.go:123] Gathering logs for container status ...
	I1028 12:59:58.203484  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:59:58.238589  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 12:59:58.238624  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:59:58.290759  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 12:59:58.290789  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:59:58.303308  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:59:58.303348  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:59:58.366619  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:00.867396  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:00.879968  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:00.880034  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:00.910538  129528 cri.go:89] found id: ""
	I1028 13:00:00.910568  129528 logs.go:282] 0 containers: []
	W1028 13:00:00.910579  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:00.910586  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:00.910647  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:00.942526  129528 cri.go:89] found id: ""
	I1028 13:00:00.942553  129528 logs.go:282] 0 containers: []
	W1028 13:00:00.942561  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:00.942567  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:00.942619  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:00.972256  129528 cri.go:89] found id: ""
	I1028 13:00:00.972288  129528 logs.go:282] 0 containers: []
	W1028 13:00:00.972298  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:00.972305  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:00.972361  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:01.003028  129528 cri.go:89] found id: ""
	I1028 13:00:01.003060  129528 logs.go:282] 0 containers: []
	W1028 13:00:01.003070  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:01.003076  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:01.003128  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:01.035109  129528 cri.go:89] found id: ""
	I1028 13:00:01.035144  129528 logs.go:282] 0 containers: []
	W1028 13:00:01.035155  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:01.035164  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:01.035225  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:01.066518  129528 cri.go:89] found id: ""
	I1028 13:00:01.066551  129528 logs.go:282] 0 containers: []
	W1028 13:00:01.066562  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:01.066580  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:01.066652  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:01.099730  129528 cri.go:89] found id: ""
	I1028 13:00:01.099763  129528 logs.go:282] 0 containers: []
	W1028 13:00:01.099774  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:01.099790  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:01.099848  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:01.135208  129528 cri.go:89] found id: ""
	I1028 13:00:01.135243  129528 logs.go:282] 0 containers: []
	W1028 13:00:01.135255  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:01.135268  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:01.135283  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:01.217936  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:01.217989  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:01.264397  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:01.264436  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:01.316373  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:01.316409  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:01.329425  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:01.329454  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:01.396315  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:03.896628  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:03.910692  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:03.910771  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:03.946265  129528 cri.go:89] found id: ""
	I1028 13:00:03.946295  129528 logs.go:282] 0 containers: []
	W1028 13:00:03.946304  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:03.946310  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:03.946375  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:03.980751  129528 cri.go:89] found id: ""
	I1028 13:00:03.980801  129528 logs.go:282] 0 containers: []
	W1028 13:00:03.980813  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:03.980821  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:03.980890  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:04.013702  129528 cri.go:89] found id: ""
	I1028 13:00:04.013738  129528 logs.go:282] 0 containers: []
	W1028 13:00:04.013751  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:04.013758  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:04.013826  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:04.043855  129528 cri.go:89] found id: ""
	I1028 13:00:04.043892  129528 logs.go:282] 0 containers: []
	W1028 13:00:04.043903  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:04.043913  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:04.043977  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:04.072631  129528 cri.go:89] found id: ""
	I1028 13:00:04.072665  129528 logs.go:282] 0 containers: []
	W1028 13:00:04.072680  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:04.072689  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:04.072754  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:04.104126  129528 cri.go:89] found id: ""
	I1028 13:00:04.104156  129528 logs.go:282] 0 containers: []
	W1028 13:00:04.104167  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:04.104175  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:04.104250  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:04.136393  129528 cri.go:89] found id: ""
	I1028 13:00:04.136424  129528 logs.go:282] 0 containers: []
	W1028 13:00:04.136435  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:04.136442  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:04.136494  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:04.170165  129528 cri.go:89] found id: ""
	I1028 13:00:04.170187  129528 logs.go:282] 0 containers: []
	W1028 13:00:04.170195  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:04.170203  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:04.170215  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:04.183115  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:04.183143  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:04.263702  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:04.263727  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:04.263749  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:04.343475  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:04.343516  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:04.380974  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:04.381002  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:06.930609  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:06.943315  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:06.943396  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:06.976654  129528 cri.go:89] found id: ""
	I1028 13:00:06.976681  129528 logs.go:282] 0 containers: []
	W1028 13:00:06.976689  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:06.976695  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:06.976752  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:07.006740  129528 cri.go:89] found id: ""
	I1028 13:00:07.006771  129528 logs.go:282] 0 containers: []
	W1028 13:00:07.006780  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:07.006786  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:07.006843  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:07.038347  129528 cri.go:89] found id: ""
	I1028 13:00:07.038388  129528 logs.go:282] 0 containers: []
	W1028 13:00:07.038398  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:07.038416  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:07.038491  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:07.069001  129528 cri.go:89] found id: ""
	I1028 13:00:07.069027  129528 logs.go:282] 0 containers: []
	W1028 13:00:07.069035  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:07.069042  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:07.069111  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:07.099575  129528 cri.go:89] found id: ""
	I1028 13:00:07.099603  129528 logs.go:282] 0 containers: []
	W1028 13:00:07.099612  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:07.099618  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:07.099686  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:07.141254  129528 cri.go:89] found id: ""
	I1028 13:00:07.141290  129528 logs.go:282] 0 containers: []
	W1028 13:00:07.141300  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:07.141307  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:07.141374  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:07.177990  129528 cri.go:89] found id: ""
	I1028 13:00:07.178024  129528 logs.go:282] 0 containers: []
	W1028 13:00:07.178034  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:07.178043  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:07.178113  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:07.212233  129528 cri.go:89] found id: ""
	I1028 13:00:07.212275  129528 logs.go:282] 0 containers: []
	W1028 13:00:07.212287  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:07.212298  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:07.212314  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:07.278892  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:07.278914  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:07.278928  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:07.356927  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:07.356971  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:07.397540  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:07.397573  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:07.448137  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:07.448175  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:09.960988  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:09.973728  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:09.973858  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:10.006192  129528 cri.go:89] found id: ""
	I1028 13:00:10.006224  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.006236  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:10.006248  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:10.006329  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:10.036100  129528 cri.go:89] found id: ""
	I1028 13:00:10.036132  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.036140  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:10.036150  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:10.036203  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:10.066594  129528 cri.go:89] found id: ""
	I1028 13:00:10.066628  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.066636  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:10.066643  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:10.066692  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:10.097598  129528 cri.go:89] found id: ""
	I1028 13:00:10.097627  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.097637  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:10.097644  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:10.097694  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:10.129763  129528 cri.go:89] found id: ""
	I1028 13:00:10.129788  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.129798  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:10.129806  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:10.129872  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:10.164082  129528 cri.go:89] found id: ""
	I1028 13:00:10.164118  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.164127  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:10.164134  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:10.164186  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:10.196389  129528 cri.go:89] found id: ""
	I1028 13:00:10.196423  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.196434  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:10.196443  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:10.196499  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:10.229590  129528 cri.go:89] found id: ""
	I1028 13:00:10.229630  129528 logs.go:282] 0 containers: []
	W1028 13:00:10.229643  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:10.229659  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:10.229679  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:10.279474  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:10.279509  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:10.292114  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:10.292143  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:10.356572  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:10.356604  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:10.356619  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:10.436224  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:10.436265  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:12.972705  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:12.986436  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:12.986498  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:13.019834  129528 cri.go:89] found id: ""
	I1028 13:00:13.019868  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.019879  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:13.019887  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:13.019953  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:13.051717  129528 cri.go:89] found id: ""
	I1028 13:00:13.051750  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.051762  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:13.051771  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:13.051847  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:13.082652  129528 cri.go:89] found id: ""
	I1028 13:00:13.082679  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.082687  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:13.082694  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:13.082744  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:13.113838  129528 cri.go:89] found id: ""
	I1028 13:00:13.113866  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.113874  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:13.113880  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:13.113930  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:13.144416  129528 cri.go:89] found id: ""
	I1028 13:00:13.144445  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.144453  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:13.144460  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:13.144511  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:13.177941  129528 cri.go:89] found id: ""
	I1028 13:00:13.177977  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.177988  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:13.177997  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:13.178054  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:13.213200  129528 cri.go:89] found id: ""
	I1028 13:00:13.213234  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.213247  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:13.213253  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:13.213321  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:13.245638  129528 cri.go:89] found id: ""
	I1028 13:00:13.245664  129528 logs.go:282] 0 containers: []
	W1028 13:00:13.245673  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:13.245682  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:13.245697  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:13.295203  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:13.295243  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:13.307336  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:13.307375  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:13.377544  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:13.377590  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:13.377610  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:13.451810  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:13.451852  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:15.992889  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:16.005344  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:16.005406  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:16.038728  129528 cri.go:89] found id: ""
	I1028 13:00:16.038758  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.038769  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:16.038777  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:16.038839  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:16.071806  129528 cri.go:89] found id: ""
	I1028 13:00:16.071838  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.071850  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:16.071865  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:16.071926  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:16.107324  129528 cri.go:89] found id: ""
	I1028 13:00:16.107351  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.107359  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:16.107370  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:16.107428  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:16.139438  129528 cri.go:89] found id: ""
	I1028 13:00:16.139466  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.139477  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:16.139486  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:16.139706  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:16.171803  129528 cri.go:89] found id: ""
	I1028 13:00:16.171841  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.171853  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:16.171861  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:16.171925  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:16.205062  129528 cri.go:89] found id: ""
	I1028 13:00:16.205090  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.205099  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:16.205108  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:16.205171  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:16.236526  129528 cri.go:89] found id: ""
	I1028 13:00:16.236555  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.236562  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:16.236568  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:16.236627  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:16.267977  129528 cri.go:89] found id: ""
	I1028 13:00:16.268005  129528 logs.go:282] 0 containers: []
	W1028 13:00:16.268016  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:16.268028  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:16.268043  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:16.318634  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:16.318668  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:16.334754  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:16.334785  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:16.435282  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:16.435312  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:16.435325  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:16.512306  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:16.512343  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:19.049659  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:19.062206  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:19.062263  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:19.093771  129528 cri.go:89] found id: ""
	I1028 13:00:19.093800  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.093815  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:19.093824  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:19.093885  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:19.124976  129528 cri.go:89] found id: ""
	I1028 13:00:19.125009  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.125023  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:19.125033  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:19.125108  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:19.158104  129528 cri.go:89] found id: ""
	I1028 13:00:19.158134  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.158145  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:19.158153  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:19.158223  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:19.194507  129528 cri.go:89] found id: ""
	I1028 13:00:19.194532  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.194543  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:19.194550  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:19.194612  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:19.226563  129528 cri.go:89] found id: ""
	I1028 13:00:19.226591  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.226600  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:19.226607  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:19.226664  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:19.259382  129528 cri.go:89] found id: ""
	I1028 13:00:19.259411  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.259423  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:19.259431  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:19.259499  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:19.294825  129528 cri.go:89] found id: ""
	I1028 13:00:19.294860  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.294871  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:19.294879  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:19.294942  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:19.330110  129528 cri.go:89] found id: ""
	I1028 13:00:19.330144  129528 logs.go:282] 0 containers: []
	W1028 13:00:19.330163  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:19.330249  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:19.330294  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:19.370161  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:19.370187  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:19.419368  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:19.419403  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:19.431861  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:19.431888  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:19.498528  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:19.498552  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:19.498569  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:22.083751  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:22.102950  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:22.103014  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:22.156632  129528 cri.go:89] found id: ""
	I1028 13:00:22.156662  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.156673  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:22.156681  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:22.156747  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:22.219995  129528 cri.go:89] found id: ""
	I1028 13:00:22.220027  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.220039  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:22.220048  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:22.220115  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:22.252165  129528 cri.go:89] found id: ""
	I1028 13:00:22.252198  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.252210  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:22.252218  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:22.252288  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:22.284287  129528 cri.go:89] found id: ""
	I1028 13:00:22.284322  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.284336  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:22.284349  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:22.284420  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:22.316279  129528 cri.go:89] found id: ""
	I1028 13:00:22.316315  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.316328  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:22.316340  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:22.316410  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:22.351507  129528 cri.go:89] found id: ""
	I1028 13:00:22.351539  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.351556  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:22.351566  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:22.351645  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:22.387501  129528 cri.go:89] found id: ""
	I1028 13:00:22.387528  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.387537  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:22.387555  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:22.387605  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:22.429368  129528 cri.go:89] found id: ""
	I1028 13:00:22.429394  129528 logs.go:282] 0 containers: []
	W1028 13:00:22.429402  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:22.429412  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:22.429426  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:22.467200  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:22.467238  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:22.516309  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:22.516343  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:22.529422  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:22.529454  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:22.604637  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:22.604666  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:22.604682  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:25.183951  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:25.196710  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:25.196795  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:25.231522  129528 cri.go:89] found id: ""
	I1028 13:00:25.231549  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.231557  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:25.231563  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:25.231670  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:25.264973  129528 cri.go:89] found id: ""
	I1028 13:00:25.265004  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.265016  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:25.265024  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:25.265099  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:25.295576  129528 cri.go:89] found id: ""
	I1028 13:00:25.295609  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.295621  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:25.295641  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:25.295713  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:25.326754  129528 cri.go:89] found id: ""
	I1028 13:00:25.326798  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.326808  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:25.326815  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:25.326873  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:25.357073  129528 cri.go:89] found id: ""
	I1028 13:00:25.357103  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.357111  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:25.357118  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:25.357181  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:25.392180  129528 cri.go:89] found id: ""
	I1028 13:00:25.392207  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.392215  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:25.392222  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:25.392274  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:25.422507  129528 cri.go:89] found id: ""
	I1028 13:00:25.422539  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.422547  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:25.422554  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:25.422604  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:25.462780  129528 cri.go:89] found id: ""
	I1028 13:00:25.462807  129528 logs.go:282] 0 containers: []
	W1028 13:00:25.462814  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:25.462824  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:25.462842  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:25.512993  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:25.513034  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:25.525712  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:25.525740  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:25.586933  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:25.586953  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:25.586967  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:25.665000  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:25.665035  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:28.202343  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:28.214598  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:28.214654  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:28.244471  129528 cri.go:89] found id: ""
	I1028 13:00:28.244501  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.244509  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:28.244516  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:28.244574  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:28.277014  129528 cri.go:89] found id: ""
	I1028 13:00:28.277048  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.277057  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:28.277064  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:28.277132  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:28.307540  129528 cri.go:89] found id: ""
	I1028 13:00:28.307582  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.307594  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:28.307602  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:28.307693  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:28.342640  129528 cri.go:89] found id: ""
	I1028 13:00:28.342672  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.342683  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:28.342692  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:28.342748  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:28.374304  129528 cri.go:89] found id: ""
	I1028 13:00:28.374332  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.374340  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:28.374347  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:28.374404  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:28.407355  129528 cri.go:89] found id: ""
	I1028 13:00:28.407383  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.407394  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:28.407402  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:28.407467  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:28.438445  129528 cri.go:89] found id: ""
	I1028 13:00:28.438477  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.438488  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:28.438496  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:28.438558  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:28.469121  129528 cri.go:89] found id: ""
	I1028 13:00:28.469167  129528 logs.go:282] 0 containers: []
	W1028 13:00:28.469176  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:28.469186  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:28.469203  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:28.520200  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:28.520238  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:28.532363  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:28.532390  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:28.599514  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:28.599541  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:28.599553  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:28.677715  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:28.677752  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:31.217080  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:31.230645  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:31.230707  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:31.261565  129528 cri.go:89] found id: ""
	I1028 13:00:31.261606  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.261617  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:31.261624  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:31.261696  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:31.293054  129528 cri.go:89] found id: ""
	I1028 13:00:31.293096  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.293108  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:31.293116  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:31.293181  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:31.323432  129528 cri.go:89] found id: ""
	I1028 13:00:31.323458  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.323467  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:31.323475  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:31.323531  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:31.354852  129528 cri.go:89] found id: ""
	I1028 13:00:31.354889  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.354901  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:31.354909  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:31.354981  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:31.385890  129528 cri.go:89] found id: ""
	I1028 13:00:31.385924  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.385936  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:31.385945  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:31.386014  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:31.416179  129528 cri.go:89] found id: ""
	I1028 13:00:31.416212  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.416221  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:31.416228  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:31.416292  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:31.446678  129528 cri.go:89] found id: ""
	I1028 13:00:31.446708  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.446716  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:31.446723  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:31.446786  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:31.478329  129528 cri.go:89] found id: ""
	I1028 13:00:31.478354  129528 logs.go:282] 0 containers: []
	W1028 13:00:31.478361  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:31.478370  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:31.478384  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:31.529616  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:31.529652  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:31.542144  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:31.542174  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:31.609216  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:31.609243  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:31.609261  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:31.695496  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:31.695542  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:34.235486  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:34.247683  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:34.247758  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:34.278390  129528 cri.go:89] found id: ""
	I1028 13:00:34.278424  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.278433  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:34.278440  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:34.278491  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:34.311140  129528 cri.go:89] found id: ""
	I1028 13:00:34.311169  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.311178  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:34.311185  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:34.311248  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:34.340644  129528 cri.go:89] found id: ""
	I1028 13:00:34.340675  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.340682  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:34.340691  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:34.340759  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:34.370829  129528 cri.go:89] found id: ""
	I1028 13:00:34.370857  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.370865  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:34.370874  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:34.370936  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:34.402871  129528 cri.go:89] found id: ""
	I1028 13:00:34.402904  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.402915  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:34.402924  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:34.402992  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:34.432826  129528 cri.go:89] found id: ""
	I1028 13:00:34.432859  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.432867  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:34.432873  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:34.432935  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:34.468545  129528 cri.go:89] found id: ""
	I1028 13:00:34.468583  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.468594  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:34.468603  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:34.468667  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:34.506677  129528 cri.go:89] found id: ""
	I1028 13:00:34.506712  129528 logs.go:282] 0 containers: []
	W1028 13:00:34.506720  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:34.506732  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:34.506746  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:34.584198  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:34.584238  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:34.622943  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:34.622980  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:34.673273  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:34.673310  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:34.686814  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:34.686850  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:34.755673  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:37.256857  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:37.269006  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:37.269077  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:37.305158  129528 cri.go:89] found id: ""
	I1028 13:00:37.305183  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.305191  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:37.305197  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:37.305253  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:37.336135  129528 cri.go:89] found id: ""
	I1028 13:00:37.336169  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.336181  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:37.336189  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:37.336246  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:37.366077  129528 cri.go:89] found id: ""
	I1028 13:00:37.366106  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.366114  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:37.366120  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:37.366171  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:37.399896  129528 cri.go:89] found id: ""
	I1028 13:00:37.399925  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.399933  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:37.399940  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:37.400004  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:37.430763  129528 cri.go:89] found id: ""
	I1028 13:00:37.430795  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.430809  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:37.430817  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:37.430880  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:37.461448  129528 cri.go:89] found id: ""
	I1028 13:00:37.461474  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.461483  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:37.461489  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:37.461538  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:37.492891  129528 cri.go:89] found id: ""
	I1028 13:00:37.492915  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.492922  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:37.492929  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:37.492988  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:37.523841  129528 cri.go:89] found id: ""
	I1028 13:00:37.523869  129528 logs.go:282] 0 containers: []
	W1028 13:00:37.523878  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:37.523888  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:37.523899  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:37.573803  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:37.573838  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:37.587123  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:37.587152  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:37.659948  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:37.659975  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:37.659988  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:37.746094  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:37.746131  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:40.283382  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:40.295547  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:40.295617  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:40.325698  129528 cri.go:89] found id: ""
	I1028 13:00:40.325723  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.325731  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:40.325738  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:40.325788  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:40.356319  129528 cri.go:89] found id: ""
	I1028 13:00:40.356349  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.356361  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:40.356369  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:40.356433  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:40.390436  129528 cri.go:89] found id: ""
	I1028 13:00:40.390463  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.390471  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:40.390477  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:40.390539  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:40.427140  129528 cri.go:89] found id: ""
	I1028 13:00:40.427176  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.427187  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:40.427197  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:40.427267  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:40.471714  129528 cri.go:89] found id: ""
	I1028 13:00:40.471748  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.471760  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:40.471769  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:40.471826  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:40.503190  129528 cri.go:89] found id: ""
	I1028 13:00:40.503221  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.503232  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:40.503240  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:40.503310  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:40.542897  129528 cri.go:89] found id: ""
	I1028 13:00:40.542932  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.542954  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:40.542964  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:40.543034  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:40.574358  129528 cri.go:89] found id: ""
	I1028 13:00:40.574397  129528 logs.go:282] 0 containers: []
	W1028 13:00:40.574411  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:40.574425  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:40.574444  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:40.651920  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:40.651949  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:40.651967  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:40.733567  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:40.733609  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:40.775981  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:40.776010  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:40.826223  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:40.826260  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:43.339676  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:43.352805  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:43.352881  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:43.384866  129528 cri.go:89] found id: ""
	I1028 13:00:43.384896  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.384906  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:43.384913  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:43.384970  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:43.414875  129528 cri.go:89] found id: ""
	I1028 13:00:43.414912  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.414924  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:43.414932  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:43.414999  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:43.445153  129528 cri.go:89] found id: ""
	I1028 13:00:43.445182  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.445189  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:43.445197  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:43.445261  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:43.476814  129528 cri.go:89] found id: ""
	I1028 13:00:43.476840  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.476849  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:43.476855  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:43.476921  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:43.511486  129528 cri.go:89] found id: ""
	I1028 13:00:43.511514  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.511522  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:43.511529  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:43.511590  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:43.542164  129528 cri.go:89] found id: ""
	I1028 13:00:43.542192  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.542202  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:43.542210  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:43.542276  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:43.572783  129528 cri.go:89] found id: ""
	I1028 13:00:43.572810  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.572819  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:43.572825  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:43.572879  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:43.612859  129528 cri.go:89] found id: ""
	I1028 13:00:43.612890  129528 logs.go:282] 0 containers: []
	W1028 13:00:43.612899  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:43.612909  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:43.612928  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:43.665857  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:43.665891  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:43.678461  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:43.678489  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:43.749367  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:43.749389  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:43.749403  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:43.822478  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:43.822520  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:46.364874  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:46.377705  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:46.377810  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:46.408518  129528 cri.go:89] found id: ""
	I1028 13:00:46.408557  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.408583  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:46.408592  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:46.408659  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:46.441399  129528 cri.go:89] found id: ""
	I1028 13:00:46.441439  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.441451  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:46.441459  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:46.441524  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:46.474440  129528 cri.go:89] found id: ""
	I1028 13:00:46.474476  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.474486  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:46.474493  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:46.474561  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:46.505200  129528 cri.go:89] found id: ""
	I1028 13:00:46.505232  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.505243  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:46.505252  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:46.505312  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:46.536330  129528 cri.go:89] found id: ""
	I1028 13:00:46.536363  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.536374  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:46.536383  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:46.536447  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:46.568545  129528 cri.go:89] found id: ""
	I1028 13:00:46.568578  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.568590  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:46.568599  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:46.568664  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:46.600221  129528 cri.go:89] found id: ""
	I1028 13:00:46.600255  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.600265  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:46.600274  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:46.600336  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:46.632767  129528 cri.go:89] found id: ""
	I1028 13:00:46.632811  129528 logs.go:282] 0 containers: []
	W1028 13:00:46.632823  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:46.632836  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:46.632853  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:46.684836  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:46.684870  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:46.698797  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:46.698828  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:46.768941  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:46.768971  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:46.768989  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:46.841931  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:46.841968  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:49.379151  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:49.391321  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:49.391386  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:49.423649  129528 cri.go:89] found id: ""
	I1028 13:00:49.423685  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.423697  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:49.423707  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:49.423773  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:49.463176  129528 cri.go:89] found id: ""
	I1028 13:00:49.463207  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.463219  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:49.463228  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:49.463294  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:49.495968  129528 cri.go:89] found id: ""
	I1028 13:00:49.495998  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.496007  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:49.496013  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:49.496062  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:49.526259  129528 cri.go:89] found id: ""
	I1028 13:00:49.526283  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.526292  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:49.526298  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:49.526349  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:49.556778  129528 cri.go:89] found id: ""
	I1028 13:00:49.556815  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.556828  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:49.556836  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:49.556910  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:49.588257  129528 cri.go:89] found id: ""
	I1028 13:00:49.588293  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.588305  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:49.588314  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:49.588367  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:49.623133  129528 cri.go:89] found id: ""
	I1028 13:00:49.623163  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.623172  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:49.623179  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:49.623233  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:49.655695  129528 cri.go:89] found id: ""
	I1028 13:00:49.655734  129528 logs.go:282] 0 containers: []
	W1028 13:00:49.655746  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:49.655759  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:49.655775  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:49.734672  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:49.734708  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:49.774307  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:49.774334  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:49.825007  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:49.825039  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:49.837940  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:49.837970  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:49.907248  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:52.407493  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:52.419789  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:52.419864  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:52.454048  129528 cri.go:89] found id: ""
	I1028 13:00:52.454072  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.454080  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:52.454091  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:52.454144  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:52.485347  129528 cri.go:89] found id: ""
	I1028 13:00:52.485383  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.485395  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:52.485403  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:52.485471  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:52.518548  129528 cri.go:89] found id: ""
	I1028 13:00:52.518579  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.518588  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:52.518594  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:52.518664  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:52.549957  129528 cri.go:89] found id: ""
	I1028 13:00:52.549990  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.550002  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:52.550011  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:52.550076  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:52.581459  129528 cri.go:89] found id: ""
	I1028 13:00:52.581494  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.581507  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:52.581516  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:52.581583  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:52.612915  129528 cri.go:89] found id: ""
	I1028 13:00:52.612946  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.612958  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:52.612967  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:52.613033  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:52.645354  129528 cri.go:89] found id: ""
	I1028 13:00:52.645388  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.645399  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:52.645407  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:52.645475  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:52.677930  129528 cri.go:89] found id: ""
	I1028 13:00:52.677965  129528 logs.go:282] 0 containers: []
	W1028 13:00:52.677977  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:52.677989  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:52.678003  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:52.731003  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:52.731051  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:52.745153  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:52.745193  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:52.813446  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:52.813485  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:52.813502  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:52.887152  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:52.887189  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:55.424639  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:55.437853  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:55.437922  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:55.470937  129528 cri.go:89] found id: ""
	I1028 13:00:55.470968  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.470980  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:55.470989  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:55.471055  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:55.503805  129528 cri.go:89] found id: ""
	I1028 13:00:55.503834  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.503845  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:55.503854  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:55.503928  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:55.534509  129528 cri.go:89] found id: ""
	I1028 13:00:55.534541  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.534549  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:55.534560  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:55.534615  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:55.564061  129528 cri.go:89] found id: ""
	I1028 13:00:55.564100  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.564112  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:55.564121  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:55.564181  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:55.593485  129528 cri.go:89] found id: ""
	I1028 13:00:55.593513  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.593526  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:55.593533  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:55.593587  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:55.625057  129528 cri.go:89] found id: ""
	I1028 13:00:55.625087  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.625098  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:55.625106  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:55.625174  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:55.655319  129528 cri.go:89] found id: ""
	I1028 13:00:55.655351  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.655364  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:55.655374  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:55.655442  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:55.690374  129528 cri.go:89] found id: ""
	I1028 13:00:55.690405  129528 logs.go:282] 0 containers: []
	W1028 13:00:55.690416  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:55.690427  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:55.690443  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:55.763719  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:55.763756  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:55.803884  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:55.803919  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:55.852881  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:55.852919  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:00:55.864808  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:55.864840  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:55.933783  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:58.434200  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:00:58.446341  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:00:58.446416  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:00:58.477549  129528 cri.go:89] found id: ""
	I1028 13:00:58.477586  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.477598  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:00:58.477608  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:00:58.477672  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:00:58.508162  129528 cri.go:89] found id: ""
	I1028 13:00:58.508195  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.508205  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:00:58.508213  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:00:58.508283  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:00:58.541262  129528 cri.go:89] found id: ""
	I1028 13:00:58.541299  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.541310  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:00:58.541320  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:00:58.541382  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:00:58.573007  129528 cri.go:89] found id: ""
	I1028 13:00:58.573036  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.573045  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:00:58.573051  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:00:58.573115  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:00:58.608924  129528 cri.go:89] found id: ""
	I1028 13:00:58.608956  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.608965  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:00:58.608972  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:00:58.609031  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:00:58.640767  129528 cri.go:89] found id: ""
	I1028 13:00:58.640803  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.640815  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:00:58.640833  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:00:58.640896  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:00:58.676919  129528 cri.go:89] found id: ""
	I1028 13:00:58.676946  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.676956  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:00:58.676963  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:00:58.677016  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:00:58.710403  129528 cri.go:89] found id: ""
	I1028 13:00:58.710433  129528 logs.go:282] 0 containers: []
	W1028 13:00:58.710442  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:00:58.710452  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:00:58.710464  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:00:58.776654  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:00:58.776681  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:00:58.776702  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:00:58.853547  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:00:58.853594  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:00:58.892003  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:00:58.892041  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:00:58.941464  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:00:58.941504  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:01:01.455328  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:01:01.467732  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:01:01.467811  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:01:01.500084  129528 cri.go:89] found id: ""
	I1028 13:01:01.500112  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.500122  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:01:01.500130  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:01:01.500206  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:01:01.539451  129528 cri.go:89] found id: ""
	I1028 13:01:01.539483  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.539502  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:01:01.539515  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:01:01.539571  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:01:01.569550  129528 cri.go:89] found id: ""
	I1028 13:01:01.569577  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.569585  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:01:01.569591  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:01:01.569643  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:01:01.602626  129528 cri.go:89] found id: ""
	I1028 13:01:01.602656  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.602668  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:01:01.602678  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:01:01.602742  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:01:01.637067  129528 cri.go:89] found id: ""
	I1028 13:01:01.637096  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.637104  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:01:01.637111  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:01:01.637172  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:01:01.676348  129528 cri.go:89] found id: ""
	I1028 13:01:01.676377  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.676384  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:01:01.676391  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:01:01.676443  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:01:01.719643  129528 cri.go:89] found id: ""
	I1028 13:01:01.719671  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.719679  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:01:01.719685  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:01:01.719746  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:01:01.753020  129528 cri.go:89] found id: ""
	I1028 13:01:01.753057  129528 logs.go:282] 0 containers: []
	W1028 13:01:01.753068  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:01:01.753080  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:01:01.753102  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:01:01.764745  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:01:01.764773  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:01:01.827928  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:01:01.827958  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:01:01.827975  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:01:01.907920  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:01:01.907956  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:01:01.957265  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:01:01.957294  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:01:04.509415  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:01:04.522053  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:01:04.522138  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:01:04.556266  129528 cri.go:89] found id: ""
	I1028 13:01:04.556300  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.556314  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:01:04.556324  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:01:04.556392  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:01:04.588506  129528 cri.go:89] found id: ""
	I1028 13:01:04.588559  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.588573  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:01:04.588583  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:01:04.588665  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:01:04.624452  129528 cri.go:89] found id: ""
	I1028 13:01:04.624481  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.624492  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:01:04.624500  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:01:04.624572  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:01:04.661333  129528 cri.go:89] found id: ""
	I1028 13:01:04.661365  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.661375  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:01:04.661383  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:01:04.661443  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:01:04.692616  129528 cri.go:89] found id: ""
	I1028 13:01:04.692638  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.692646  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:01:04.692652  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:01:04.692701  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:01:04.727133  129528 cri.go:89] found id: ""
	I1028 13:01:04.727175  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.727187  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:01:04.727196  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:01:04.727285  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:01:04.760265  129528 cri.go:89] found id: ""
	I1028 13:01:04.760298  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.760310  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:01:04.760319  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:01:04.760375  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:01:04.794381  129528 cri.go:89] found id: ""
	I1028 13:01:04.794414  129528 logs.go:282] 0 containers: []
	W1028 13:01:04.794426  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:01:04.794440  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:01:04.794456  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:01:04.848057  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:01:04.848095  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:01:04.860534  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:01:04.860562  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:01:04.922280  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:01:04.922309  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:01:04.922327  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:01:04.996145  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:01:04.996177  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:01:07.530930  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:01:07.544068  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:01:07.544143  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:01:07.580244  129528 cri.go:89] found id: ""
	I1028 13:01:07.580270  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.580281  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:01:07.580289  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:01:07.580377  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:01:07.613388  129528 cri.go:89] found id: ""
	I1028 13:01:07.613420  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.613432  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:01:07.613439  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:01:07.613506  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:01:07.647805  129528 cri.go:89] found id: ""
	I1028 13:01:07.647840  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.647853  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:01:07.647861  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:01:07.647938  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:01:07.712044  129528 cri.go:89] found id: ""
	I1028 13:01:07.712081  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.712095  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:01:07.712105  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:01:07.712188  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:01:07.749129  129528 cri.go:89] found id: ""
	I1028 13:01:07.749157  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.749166  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:01:07.749173  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:01:07.749225  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:01:07.780225  129528 cri.go:89] found id: ""
	I1028 13:01:07.780257  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.780267  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:01:07.780274  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:01:07.780333  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:01:07.810235  129528 cri.go:89] found id: ""
	I1028 13:01:07.810265  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.810277  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:01:07.810285  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:01:07.810337  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:01:07.841067  129528 cri.go:89] found id: ""
	I1028 13:01:07.841099  129528 logs.go:282] 0 containers: []
	W1028 13:01:07.841110  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:01:07.841121  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:01:07.841137  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:01:07.887643  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:01:07.887681  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:01:07.901903  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:01:07.901934  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:01:07.964494  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:01:07.964525  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:01:07.964541  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:01:08.042863  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:01:08.042905  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:01:10.580994  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:01:10.593587  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:01:10.593654  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:01:10.625027  129528 cri.go:89] found id: ""
	I1028 13:01:10.625058  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.625067  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:01:10.625075  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:01:10.625144  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:01:10.658806  129528 cri.go:89] found id: ""
	I1028 13:01:10.658844  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.658855  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:01:10.658863  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:01:10.658927  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:01:10.692928  129528 cri.go:89] found id: ""
	I1028 13:01:10.692957  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.692965  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:01:10.692971  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:01:10.693017  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:01:10.722906  129528 cri.go:89] found id: ""
	I1028 13:01:10.722940  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.722952  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:01:10.722961  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:01:10.723031  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:01:10.752628  129528 cri.go:89] found id: ""
	I1028 13:01:10.752655  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.752663  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:01:10.752669  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:01:10.752732  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:01:10.781994  129528 cri.go:89] found id: ""
	I1028 13:01:10.782018  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.782026  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:01:10.782033  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:01:10.782098  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:01:10.811261  129528 cri.go:89] found id: ""
	I1028 13:01:10.811288  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.811296  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:01:10.811301  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:01:10.811361  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:01:10.841899  129528 cri.go:89] found id: ""
	I1028 13:01:10.841926  129528 logs.go:282] 0 containers: []
	W1028 13:01:10.841935  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:01:10.841946  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:01:10.841961  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:01:10.854476  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:01:10.854507  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:01:10.920612  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:01:10.920644  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:01:10.920661  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:01:10.993886  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:01:10.993923  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:01:11.027602  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:01:11.027655  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:01:13.576115  129528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:01:13.588185  129528 kubeadm.go:597] duration metric: took 4m3.353591667s to restartPrimaryControlPlane
	W1028 13:01:13.588257  129528 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 13:01:13.588287  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 13:01:14.043538  129528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:01:14.058037  129528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 13:01:14.066505  129528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 13:01:14.076438  129528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 13:01:14.076458  129528 kubeadm.go:157] found existing configuration files:
	
	I1028 13:01:14.076500  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 13:01:14.085183  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 13:01:14.085248  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 13:01:14.093451  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 13:01:14.102768  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 13:01:14.102837  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 13:01:14.111337  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 13:01:14.119172  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 13:01:14.119217  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 13:01:14.128516  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 13:01:14.137478  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 13:01:14.137553  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 13:01:14.147153  129528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 13:01:14.214272  129528 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 13:01:14.214378  129528 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 13:01:14.359604  129528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 13:01:14.359798  129528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 13:01:14.359929  129528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 13:01:14.540488  129528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 13:01:14.542548  129528 out.go:235]   - Generating certificates and keys ...
	I1028 13:01:14.542661  129528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 13:01:14.542717  129528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 13:01:14.542802  129528 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 13:01:14.542885  129528 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 13:01:14.543005  129528 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 13:01:14.543095  129528 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 13:01:14.543194  129528 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 13:01:14.543395  129528 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 13:01:14.543852  129528 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 13:01:14.544453  129528 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 13:01:14.544610  129528 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 13:01:14.544662  129528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 13:01:14.661655  129528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 13:01:14.889041  129528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 13:01:15.047618  129528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 13:01:15.271836  129528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 13:01:15.290498  129528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 13:01:15.291498  129528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 13:01:15.291568  129528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 13:01:15.417885  129528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 13:01:15.419611  129528 out.go:235]   - Booting up control plane ...
	I1028 13:01:15.419719  129528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 13:01:15.432737  129528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 13:01:15.434181  129528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 13:01:15.434959  129528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 13:01:15.437212  129528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 13:01:55.437157  129528 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 13:01:55.437285  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:01:55.437557  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:02:00.437406  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:02:00.437644  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:02:10.437931  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:02:10.438194  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:02:30.438536  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:02:30.438789  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:03:10.440674  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:03:10.440894  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:03:10.440907  129528 kubeadm.go:310] 
	I1028 13:03:10.440964  129528 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 13:03:10.441027  129528 kubeadm.go:310] 		timed out waiting for the condition
	I1028 13:03:10.441054  129528 kubeadm.go:310] 
	I1028 13:03:10.441099  129528 kubeadm.go:310] 	This error is likely caused by:
	I1028 13:03:10.441150  129528 kubeadm.go:310] 		- The kubelet is not running
	I1028 13:03:10.441269  129528 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 13:03:10.441279  129528 kubeadm.go:310] 
	I1028 13:03:10.441395  129528 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 13:03:10.441444  129528 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 13:03:10.441491  129528 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 13:03:10.441501  129528 kubeadm.go:310] 
	I1028 13:03:10.441617  129528 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 13:03:10.441742  129528 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 13:03:10.441752  129528 kubeadm.go:310] 
	I1028 13:03:10.441872  129528 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 13:03:10.441954  129528 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 13:03:10.442017  129528 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 13:03:10.442092  129528 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 13:03:10.442109  129528 kubeadm.go:310] 
	I1028 13:03:10.442601  129528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 13:03:10.442716  129528 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 13:03:10.442824  129528 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1028 13:03:10.442955  129528 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 13:03:10.443007  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 13:03:10.929962  129528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:03:10.944553  129528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 13:03:10.954063  129528 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 13:03:10.954082  129528 kubeadm.go:157] found existing configuration files:
	
	I1028 13:03:10.954119  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 13:03:10.962685  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 13:03:10.962742  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 13:03:10.971724  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 13:03:10.979992  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 13:03:10.980057  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 13:03:10.988758  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 13:03:10.996975  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 13:03:10.997029  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 13:03:11.005505  129528 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 13:03:11.013625  129528 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 13:03:11.013672  129528 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 13:03:11.022222  129528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 13:03:11.091064  129528 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 13:03:11.091194  129528 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 13:03:11.229543  129528 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 13:03:11.229707  129528 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 13:03:11.229888  129528 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 13:03:11.404931  129528 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 13:03:11.407091  129528 out.go:235]   - Generating certificates and keys ...
	I1028 13:03:11.407207  129528 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 13:03:11.407297  129528 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 13:03:11.407425  129528 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 13:03:11.407509  129528 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 13:03:11.407617  129528 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 13:03:11.407735  129528 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 13:03:11.407791  129528 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 13:03:11.408070  129528 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 13:03:11.408736  129528 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 13:03:11.409292  129528 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 13:03:11.409363  129528 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 13:03:11.409480  129528 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 13:03:11.493637  129528 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 13:03:11.629605  129528 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 13:03:11.740644  129528 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 13:03:11.850011  129528 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 13:03:11.872078  129528 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 13:03:11.875611  129528 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 13:03:11.875726  129528 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 13:03:12.015559  129528 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 13:03:12.017420  129528 out.go:235]   - Booting up control plane ...
	I1028 13:03:12.017543  129528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 13:03:12.019019  129528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 13:03:12.020455  129528 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 13:03:12.021568  129528 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 13:03:12.024247  129528 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 13:03:52.026670  129528 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 13:03:52.026967  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:03:52.027138  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:03:57.027651  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:03:57.027898  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:04:07.028329  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:04:07.028595  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:04:27.029748  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:04:27.029934  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:05:07.030909  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:05:07.031181  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:05:07.031208  129528 kubeadm.go:310] 
	I1028 13:05:07.031253  129528 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 13:05:07.031289  129528 kubeadm.go:310] 		timed out waiting for the condition
	I1028 13:05:07.031295  129528 kubeadm.go:310] 
	I1028 13:05:07.031330  129528 kubeadm.go:310] 	This error is likely caused by:
	I1028 13:05:07.031360  129528 kubeadm.go:310] 		- The kubelet is not running
	I1028 13:05:07.031450  129528 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 13:05:07.031457  129528 kubeadm.go:310] 
	I1028 13:05:07.031575  129528 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 13:05:07.031670  129528 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 13:05:07.031738  129528 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 13:05:07.031763  129528 kubeadm.go:310] 
	I1028 13:05:07.031864  129528 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 13:05:07.031941  129528 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 13:05:07.031950  129528 kubeadm.go:310] 
	I1028 13:05:07.032051  129528 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 13:05:07.032184  129528 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 13:05:07.032272  129528 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 13:05:07.032338  129528 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 13:05:07.032347  129528 kubeadm.go:310] 
	I1028 13:05:07.033167  129528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 13:05:07.033279  129528 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 13:05:07.033394  129528 kubeadm.go:394] duration metric: took 7m56.845914043s to StartCluster
	I1028 13:05:07.033414  129528 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 13:05:07.033442  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:05:07.033498  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:05:07.077299  129528 cri.go:89] found id: ""
	I1028 13:05:07.077333  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.077348  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:05:07.077357  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:05:07.077515  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:05:07.115216  129528 cri.go:89] found id: ""
	I1028 13:05:07.115253  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.115264  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:05:07.115273  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:05:07.115350  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:05:07.152335  129528 cri.go:89] found id: ""
	I1028 13:05:07.152366  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.152375  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:05:07.152385  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:05:07.152455  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:05:07.204903  129528 cri.go:89] found id: ""
	I1028 13:05:07.204937  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.204948  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:05:07.204957  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:05:07.205028  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:05:07.248190  129528 cri.go:89] found id: ""
	I1028 13:05:07.248227  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.248239  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:05:07.248248  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:05:07.248316  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:05:07.278451  129528 cri.go:89] found id: ""
	I1028 13:05:07.278476  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.278484  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:05:07.278491  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:05:07.278541  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:05:07.307661  129528 cri.go:89] found id: ""
	I1028 13:05:07.307691  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.307701  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:05:07.307710  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:05:07.307777  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:05:07.337065  129528 cri.go:89] found id: ""
	I1028 13:05:07.337090  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.337098  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:05:07.337109  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:05:07.337123  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:05:07.388324  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:05:07.388358  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:05:07.400899  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:05:07.400930  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:05:07.468773  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:05:07.468802  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:05:07.468831  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:05:07.574416  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:05:07.574456  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 13:05:07.609860  129528 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 13:05:07.609918  129528 out.go:270] * 
	W1028 13:05:07.609978  129528 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 13:05:07.609991  129528 out.go:270] * 
	W1028 13:05:07.610845  129528 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 13:05:07.613756  129528 out.go:201] 
	W1028 13:05:07.614834  129528 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 13:05:07.614875  129528 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 13:05:07.614895  129528 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 13:05:07.616217  129528 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-733464 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
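The troubleshooting hints embedded in the kubeadm output above translate into a short manual follow-up. This is only a sketch for rerunning the same checks by hand against the profile from this run (wrapping them in `ssh` through the test binary is an illustration here, not something the harness executes); the final retry uses the --extra-config suggestion that minikube itself prints for this failure:

	# kubelet state and recent journal entries on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-733464 -- "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-733464 -- "sudo journalctl -xeu kubelet | tail -n 200"
	# any Kubernetes containers the CRI-O runtime managed to start
	out/minikube-linux-amd64 ssh -p old-k8s-version-733464 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-733464 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd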
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (228.549461ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-733464 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-232896                              | stopped-upgrade-232896       | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-717454                              | cert-expiration-717454       | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:48 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-232896                              | stopped-upgrade-232896       | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:47 UTC |
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:47 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-717454                              | cert-expiration-717454       | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:48 UTC |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-818470            | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-702694             | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-733464        | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-818470                 | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC | 28 Oct 24 13:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-702694                  | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-213407 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	|         | disable-driver-mounts-213407                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
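	For reference, the last start invocation recorded in the audit table above reconstructs to the following single command line; this is an assumption pieced together from the table rows (the binary path and quoting follow how other commands appear elsewhere in this report):
	
	  # reconstructed from the final "start" row of the audit table above (hypothetical exact quoting)
	  out/minikube-linux-amd64 start -p default-k8s-diff-port-783661 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --apiserver-port=8444 --driver=kvm2 \
	    --container-runtime=crio --kubernetes-version=v1.31.2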
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:04:46
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:04:46.660479  132981 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:04:46.660591  132981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:04:46.660601  132981 out.go:358] Setting ErrFile to fd 2...
	I1028 13:04:46.660605  132981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:04:46.660771  132981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:04:46.661345  132981 out.go:352] Setting JSON to false
	I1028 13:04:46.662418  132981 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10037,"bootTime":1730110650,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:04:46.662516  132981 start.go:139] virtualization: kvm guest
	I1028 13:04:46.664462  132981 out.go:177] * [default-k8s-diff-port-783661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:04:46.666055  132981 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:04:46.666061  132981 notify.go:220] Checking for updates...
	I1028 13:04:46.667527  132981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:04:46.668912  132981 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:04:46.670243  132981 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:04:46.671441  132981 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:04:46.672732  132981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:04:46.674557  132981 config.go:182] Loaded profile config "embed-certs-818470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:04:46.674706  132981 config.go:182] Loaded profile config "no-preload-702694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:04:46.674852  132981 config.go:182] Loaded profile config "old-k8s-version-733464": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 13:04:46.674959  132981 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:04:46.710673  132981 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 13:04:46.711960  132981 start.go:297] selected driver: kvm2
	I1028 13:04:46.711980  132981 start.go:901] validating driver "kvm2" against <nil>
	I1028 13:04:46.711993  132981 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:04:46.712705  132981 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:04:46.712795  132981 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:04:46.730451  132981 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:04:46.730504  132981 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 13:04:46.730765  132981 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:04:46.730799  132981 cni.go:84] Creating CNI manager for ""
	I1028 13:04:46.730873  132981 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:04:46.730885  132981 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 13:04:46.730948  132981 start.go:340] cluster config:
	{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:04:46.731071  132981 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:04:46.732732  132981 out.go:177] * Starting "default-k8s-diff-port-783661" primary control-plane node in "default-k8s-diff-port-783661" cluster
	I1028 13:04:46.733904  132981 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:04:46.733943  132981 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:04:46.733956  132981 cache.go:56] Caching tarball of preloaded images
	I1028 13:04:46.734060  132981 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:04:46.734075  132981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:04:46.734158  132981 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:04:46.734177  132981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json: {Name:mk8285c7f406db4894de26d17c87234cc31bc779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:04:46.734335  132981 start.go:360] acquireMachinesLock for default-k8s-diff-port-783661: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:04:46.734379  132981 start.go:364] duration metric: took 22.723µs to acquireMachinesLock for "default-k8s-diff-port-783661"
	I1028 13:04:46.734405  132981 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:04:46.734473  132981 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 13:04:46.735889  132981 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 13:04:46.736020  132981 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:04:46.736066  132981 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:04:46.751126  132981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I1028 13:04:46.751594  132981 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:04:46.752294  132981 main.go:141] libmachine: Using API Version  1
	I1028 13:04:46.752317  132981 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:04:46.752676  132981 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:04:46.752876  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:04:46.753067  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:04:46.753246  132981 start.go:159] libmachine.API.Create for "default-k8s-diff-port-783661" (driver="kvm2")
	I1028 13:04:46.753280  132981 client.go:168] LocalClient.Create starting
	I1028 13:04:46.753323  132981 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 13:04:46.753364  132981 main.go:141] libmachine: Decoding PEM data...
	I1028 13:04:46.753387  132981 main.go:141] libmachine: Parsing certificate...
	I1028 13:04:46.753453  132981 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 13:04:46.753480  132981 main.go:141] libmachine: Decoding PEM data...
	I1028 13:04:46.753495  132981 main.go:141] libmachine: Parsing certificate...
	I1028 13:04:46.753518  132981 main.go:141] libmachine: Running pre-create checks...
	I1028 13:04:46.753528  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .PreCreateCheck
	I1028 13:04:46.753898  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetConfigRaw
	I1028 13:04:46.754375  132981 main.go:141] libmachine: Creating machine...
	I1028 13:04:46.754393  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Create
	I1028 13:04:46.754532  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Creating KVM machine...
	I1028 13:04:46.755739  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found existing default KVM network
	I1028 13:04:46.756871  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:46.756731  133004 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a2:38:47} reservation:<nil>}
	I1028 13:04:46.757731  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:46.757667  133004 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:cf:a6} reservation:<nil>}
	I1028 13:04:46.758843  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:46.758758  133004 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039a780}
	I1028 13:04:46.758859  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | created network xml: 
	I1028 13:04:46.758867  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | <network>
	I1028 13:04:46.758878  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |   <name>mk-default-k8s-diff-port-783661</name>
	I1028 13:04:46.758886  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |   <dns enable='no'/>
	I1028 13:04:46.758890  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |   
	I1028 13:04:46.758899  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1028 13:04:46.758904  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |     <dhcp>
	I1028 13:04:46.758911  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1028 13:04:46.758926  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |     </dhcp>
	I1028 13:04:46.758935  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |   </ip>
	I1028 13:04:46.758940  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG |   
	I1028 13:04:46.758949  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | </network>
	I1028 13:04:46.758957  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | 
	I1028 13:04:46.763770  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | trying to create private KVM network mk-default-k8s-diff-port-783661 192.168.61.0/24...
	I1028 13:04:46.830326  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | private KVM network mk-default-k8s-diff-port-783661 192.168.61.0/24 created
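	Once the private network has been created, it can be inspected directly on the host; a minimal check with standard libvirt tooling, assuming virsh is available there (the network name is taken from the XML the driver logged above):
	
	  # state and autostart flag of the freshly created network
	  virsh net-info mk-default-k8s-diff-port-783661
	  # XML libvirt actually stored, for comparison with the definition logged above
	  virsh net-dumpxml mk-default-k8s-diff-port-783661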
	I1028 13:04:46.830393  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661 ...
	I1028 13:04:46.830427  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:46.830304  133004 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:04:46.830448  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 13:04:46.830480  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 13:04:47.111677  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:47.111497  133004 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa...
	I1028 13:04:47.244622  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:47.244465  133004 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/default-k8s-diff-port-783661.rawdisk...
	I1028 13:04:47.244659  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Writing magic tar header
	I1028 13:04:47.244676  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Writing SSH key tar header
	I1028 13:04:47.244685  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:47.244588  133004 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661 ...
	I1028 13:04:47.244699  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661
	I1028 13:04:47.244762  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661 (perms=drwx------)
	I1028 13:04:47.244789  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 13:04:47.244799  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 13:04:47.244823  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:04:47.244835  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 13:04:47.244851  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 13:04:47.244862  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Checking permissions on dir: /home/jenkins
	I1028 13:04:47.244874  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 13:04:47.244885  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Checking permissions on dir: /home
	I1028 13:04:47.244896  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 13:04:47.244907  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Skipping /home - not owner
	I1028 13:04:47.244920  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 13:04:47.244932  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 13:04:47.244945  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Creating domain...
	I1028 13:04:47.246129  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) define libvirt domain using xml: 
	I1028 13:04:47.246161  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) <domain type='kvm'>
	I1028 13:04:47.246186  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   <name>default-k8s-diff-port-783661</name>
	I1028 13:04:47.246203  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   <memory unit='MiB'>2200</memory>
	I1028 13:04:47.246240  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   <vcpu>2</vcpu>
	I1028 13:04:47.246264  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   <features>
	I1028 13:04:47.246281  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <acpi/>
	I1028 13:04:47.246292  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <apic/>
	I1028 13:04:47.246301  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <pae/>
	I1028 13:04:47.246309  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     
	I1028 13:04:47.246318  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   </features>
	I1028 13:04:47.246330  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   <cpu mode='host-passthrough'>
	I1028 13:04:47.246339  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   
	I1028 13:04:47.246350  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   </cpu>
	I1028 13:04:47.246363  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   <os>
	I1028 13:04:47.246378  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <type>hvm</type>
	I1028 13:04:47.246392  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <boot dev='cdrom'/>
	I1028 13:04:47.246400  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <boot dev='hd'/>
	I1028 13:04:47.246414  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <bootmenu enable='no'/>
	I1028 13:04:47.246431  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   </os>
	I1028 13:04:47.246452  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   <devices>
	I1028 13:04:47.246465  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <disk type='file' device='cdrom'>
	I1028 13:04:47.246491  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/boot2docker.iso'/>
	I1028 13:04:47.246509  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <target dev='hdc' bus='scsi'/>
	I1028 13:04:47.246542  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <readonly/>
	I1028 13:04:47.246556  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     </disk>
	I1028 13:04:47.246566  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <disk type='file' device='disk'>
	I1028 13:04:47.246576  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 13:04:47.246606  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/default-k8s-diff-port-783661.rawdisk'/>
	I1028 13:04:47.246640  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <target dev='hda' bus='virtio'/>
	I1028 13:04:47.246659  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     </disk>
	I1028 13:04:47.246677  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <interface type='network'>
	I1028 13:04:47.246695  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <source network='mk-default-k8s-diff-port-783661'/>
	I1028 13:04:47.246714  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <model type='virtio'/>
	I1028 13:04:47.246727  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     </interface>
	I1028 13:04:47.246738  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <interface type='network'>
	I1028 13:04:47.246750  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <source network='default'/>
	I1028 13:04:47.246764  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <model type='virtio'/>
	I1028 13:04:47.246776  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     </interface>
	I1028 13:04:47.246786  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <serial type='pty'>
	I1028 13:04:47.246797  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <target port='0'/>
	I1028 13:04:47.246808  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     </serial>
	I1028 13:04:47.246819  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <console type='pty'>
	I1028 13:04:47.246832  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <target type='serial' port='0'/>
	I1028 13:04:47.246852  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     </console>
	I1028 13:04:47.246872  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     <rng model='virtio'>
	I1028 13:04:47.246882  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)       <backend model='random'>/dev/random</backend>
	I1028 13:04:47.246896  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     </rng>
	I1028 13:04:47.246907  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     
	I1028 13:04:47.246917  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)     
	I1028 13:04:47.246925  132981 main.go:141] libmachine: (default-k8s-diff-port-783661)   </devices>
	I1028 13:04:47.246935  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) </domain>
	I1028 13:04:47.246957  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) 
	I1028 13:04:47.251921  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:51:5c:5c in network default
	I1028 13:04:47.252491  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:47.252514  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring networks are active...
	I1028 13:04:47.253177  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring network default is active
	I1028 13:04:47.253468  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring network mk-default-k8s-diff-port-783661 is active
	I1028 13:04:47.253995  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Getting domain xml...
	I1028 13:04:47.254661  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Creating domain...
	I1028 13:04:48.490928  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting to get IP...
	I1028 13:04:48.491972  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:48.492525  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:48.492592  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:48.492482  133004 retry.go:31] will retry after 202.318ms: waiting for machine to come up
	I1028 13:04:48.696867  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:48.697437  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:48.697465  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:48.697379  133004 retry.go:31] will retry after 385.162523ms: waiting for machine to come up
	I1028 13:04:49.083693  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:49.084292  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:49.084324  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:49.084222  133004 retry.go:31] will retry after 327.963725ms: waiting for machine to come up
	I1028 13:04:49.413657  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:49.414231  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:49.414280  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:49.414165  133004 retry.go:31] will retry after 595.438778ms: waiting for machine to come up
	I1028 13:04:50.010924  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:50.011400  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:50.011427  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:50.011362  133004 retry.go:31] will retry after 581.68877ms: waiting for machine to come up
	I1028 13:04:50.595284  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:50.595864  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:50.595895  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:50.595814  133004 retry.go:31] will retry after 908.559192ms: waiting for machine to come up
	I1028 13:04:51.506117  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:51.506682  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:51.506710  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:51.506633  133004 retry.go:31] will retry after 901.385641ms: waiting for machine to come up
	I1028 13:04:52.409895  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:52.410410  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:52.410439  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:52.410367  133004 retry.go:31] will retry after 1.216239679s: waiting for machine to come up
	I1028 13:04:53.628848  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:53.629357  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:53.629387  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:53.629300  133004 retry.go:31] will retry after 1.644108962s: waiting for machine to come up
	I1028 13:04:55.274639  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:55.275083  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:55.275114  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:55.275029  133004 retry.go:31] will retry after 1.508125861s: waiting for machine to come up
	I1028 13:04:56.784605  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:56.785157  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:56.785209  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:56.785102  133004 retry.go:31] will retry after 2.442143769s: waiting for machine to come up
	I1028 13:04:59.228674  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:04:59.229192  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:04:59.229216  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:04:59.229142  133004 retry.go:31] will retry after 3.28932871s: waiting for machine to come up
	I1028 13:05:02.519912  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:05:02.520361  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:05:02.520385  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:05:02.520306  133004 retry.go:31] will retry after 2.791025117s: waiting for machine to come up
	I1028 13:05:05.314269  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:05:05.314780  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:05:05.314814  132981 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:05:05.314718  133004 retry.go:31] will retry after 4.696479068s: waiting for machine to come up
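	If the "waiting for machine to come up" retries above never resolve to an IP, the DHCP lease state of the libvirt network can be checked from the host; a sketch using standard libvirt commands (domain and network names taken from the log above):
	
	  # interfaces attached to the domain and the networks they sit on
	  virsh domiflist default-k8s-diff-port-783661
	  # leases handed out by the private network's dnsmasq instance
	  virsh net-dhcp-leases mk-default-k8s-diff-port-783661
	  # the domain's address as recorded in the lease file, if any
	  virsh domifaddr default-k8s-diff-port-783661 --source lease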
	I1028 13:05:07.030909  129528 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 13:05:07.031181  129528 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 13:05:07.031208  129528 kubeadm.go:310] 
	I1028 13:05:07.031253  129528 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 13:05:07.031289  129528 kubeadm.go:310] 		timed out waiting for the condition
	I1028 13:05:07.031295  129528 kubeadm.go:310] 
	I1028 13:05:07.031330  129528 kubeadm.go:310] 	This error is likely caused by:
	I1028 13:05:07.031360  129528 kubeadm.go:310] 		- The kubelet is not running
	I1028 13:05:07.031450  129528 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 13:05:07.031457  129528 kubeadm.go:310] 
	I1028 13:05:07.031575  129528 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 13:05:07.031670  129528 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 13:05:07.031738  129528 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 13:05:07.031763  129528 kubeadm.go:310] 
	I1028 13:05:07.031864  129528 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 13:05:07.031941  129528 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 13:05:07.031950  129528 kubeadm.go:310] 
	I1028 13:05:07.032051  129528 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 13:05:07.032184  129528 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 13:05:07.032272  129528 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 13:05:07.032338  129528 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 13:05:07.032347  129528 kubeadm.go:310] 
	I1028 13:05:07.033167  129528 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 13:05:07.033279  129528 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 13:05:07.033394  129528 kubeadm.go:394] duration metric: took 7m56.845914043s to StartCluster
	I1028 13:05:07.033414  129528 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
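	The troubleshooting steps kubeadm suggests above can be run inside the affected node via minikube ssh; a sketch under the assumption that the failing profile is old-k8s-version-733464 (inferred from the v1.20.0 binary path used a few lines below), with the CRI-O socket path matching the one kubeadm prints:
	
	  # kubelet state and recent journal entries on the node
	  out/minikube-linux-amd64 -p old-k8s-version-733464 ssh "sudo systemctl status kubelet"
	  out/minikube-linux-amd64 -p old-k8s-version-733464 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	  # any control-plane containers CRI-O managed to start
	  out/minikube-linux-amd64 -p old-k8s-version-733464 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"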
	I1028 13:05:07.033442  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:05:07.033498  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:05:07.077299  129528 cri.go:89] found id: ""
	I1028 13:05:07.077333  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.077348  129528 logs.go:284] No container was found matching "kube-apiserver"
	I1028 13:05:07.077357  129528 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:05:07.077515  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:05:07.115216  129528 cri.go:89] found id: ""
	I1028 13:05:07.115253  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.115264  129528 logs.go:284] No container was found matching "etcd"
	I1028 13:05:07.115273  129528 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:05:07.115350  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:05:07.152335  129528 cri.go:89] found id: ""
	I1028 13:05:07.152366  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.152375  129528 logs.go:284] No container was found matching "coredns"
	I1028 13:05:07.152385  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:05:07.152455  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:05:07.204903  129528 cri.go:89] found id: ""
	I1028 13:05:07.204937  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.204948  129528 logs.go:284] No container was found matching "kube-scheduler"
	I1028 13:05:07.204957  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:05:07.205028  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:05:07.248190  129528 cri.go:89] found id: ""
	I1028 13:05:07.248227  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.248239  129528 logs.go:284] No container was found matching "kube-proxy"
	I1028 13:05:07.248248  129528 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:05:07.248316  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:05:07.278451  129528 cri.go:89] found id: ""
	I1028 13:05:07.278476  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.278484  129528 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 13:05:07.278491  129528 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:05:07.278541  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:05:07.307661  129528 cri.go:89] found id: ""
	I1028 13:05:07.307691  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.307701  129528 logs.go:284] No container was found matching "kindnet"
	I1028 13:05:07.307710  129528 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 13:05:07.307777  129528 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 13:05:07.337065  129528 cri.go:89] found id: ""
	I1028 13:05:07.337090  129528 logs.go:282] 0 containers: []
	W1028 13:05:07.337098  129528 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 13:05:07.337109  129528 logs.go:123] Gathering logs for kubelet ...
	I1028 13:05:07.337123  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:05:07.388324  129528 logs.go:123] Gathering logs for dmesg ...
	I1028 13:05:07.388358  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:05:07.400899  129528 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:05:07.400930  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 13:05:07.468773  129528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 13:05:07.468802  129528 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:05:07.468831  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:05:07.574416  129528 logs.go:123] Gathering logs for container status ...
	I1028 13:05:07.574456  129528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 13:05:07.609860  129528 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 13:05:07.609918  129528 out.go:270] * 
	W1028 13:05:07.609978  129528 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 13:05:07.609991  129528 out.go:270] * 
	W1028 13:05:07.610845  129528 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 13:05:07.613756  129528 out.go:201] 
	W1028 13:05:07.614834  129528 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 13:05:07.614875  129528 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 13:05:07.614895  129528 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 13:05:07.616217  129528 out.go:201] 
	
	
	==> CRI-O <==
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.526338838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120708526305239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5e72eca-c71f-4166-a63b-25e245011057 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.526970802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a7a3b92-98bf-405e-b31b-186b761cbd09 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.527019421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a7a3b92-98bf-405e-b31b-186b761cbd09 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.527059518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a7a3b92-98bf-405e-b31b-186b761cbd09 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.554314711Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d84ae314-199c-4558-be47-3cfd1adb716a name=/runtime.v1.RuntimeService/Version
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.554397778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d84ae314-199c-4558-be47-3cfd1adb716a name=/runtime.v1.RuntimeService/Version
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.555183000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91645367-97eb-4d30-9c3d-eeb509098b01 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.555578112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120708555556068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91645367-97eb-4d30-9c3d-eeb509098b01 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.556018537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46f33547-e00e-408b-a96e-e8bf75fa90f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.556062791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46f33547-e00e-408b-a96e-e8bf75fa90f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.556092360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=46f33547-e00e-408b-a96e-e8bf75fa90f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.585321193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=142a5fae-f150-4959-9d10-8a34e4e237da name=/runtime.v1.RuntimeService/Version
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.585399366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=142a5fae-f150-4959-9d10-8a34e4e237da name=/runtime.v1.RuntimeService/Version
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.586329437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e82349d-7d0e-449b-95aa-2647b983a4ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.586727037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120708586706977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e82349d-7d0e-449b-95aa-2647b983a4ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.587302571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b08b51a-b6a0-414a-9332-57f8516b0f8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.587365743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b08b51a-b6a0-414a-9332-57f8516b0f8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.587403385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6b08b51a-b6a0-414a-9332-57f8516b0f8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.617559243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6b0cecc-a285-4458-9ac6-ec1288d01480 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.617646199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6b0cecc-a285-4458-9ac6-ec1288d01480 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.618744393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a72196a8-8b35-4cb7-b5fc-231e084b239a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.619143943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120708619124218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a72196a8-8b35-4cb7-b5fc-231e084b239a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.619679642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6fc9608-a287-41c9-98e1-987e4d574087 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.619742615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6fc9608-a287-41c9-98e1-987e4d574087 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:05:08 old-k8s-version-733464 crio[631]: time="2024-10-28 13:05:08.619813876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d6fc9608-a287-41c9-98e1-987e4d574087 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053749] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037595] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.829427] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.915680] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.519083] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 12:57] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.070642] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061498] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.188572] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.147125] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.277465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.361094] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.069839] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.013009] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.125884] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 13:01] systemd-fstab-generator[5140]: Ignoring "noauto" option for root device
	[Oct28 13:03] systemd-fstab-generator[5420]: Ignoring "noauto" option for root device
	[  +0.055820] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:05:08 up 8 min,  0 users,  load average: 0.03, 0.11, 0.07
	Linux old-k8s-version-733464 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001c1260, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000ac0600, 0x24, 0x0, ...)
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]: net.(*Dialer).DialContext(0xc00009f800, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ac0600, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008fb880, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ac0600, 0x24, 0x60, 0x7f43081ddf60, 0x118, ...)
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]: net/http.(*Transport).dial(0xc0005d9680, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ac0600, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]: net/http.(*Transport).dialConn(0xc0005d9680, 0x4f7fe00, 0xc000120018, 0x0, 0xc0000d9e00, 0x5, 0xc000ac0600, 0x24, 0x0, 0xc000abeb40, ...)
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]: net/http.(*Transport).dialConnFor(0xc0005d9680, 0xc000abc420)
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]: created by net/http.(*Transport).queueForDial
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5600]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 28 13:05:07 old-k8s-version-733464 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 13:05:07 old-k8s-version-733464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 13:05:07 old-k8s-version-733464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 28 13:05:07 old-k8s-version-733464 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 13:05:07 old-k8s-version-733464 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5666]: I1028 13:05:07.946004    5666 server.go:416] Version: v1.20.0
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5666]: I1028 13:05:07.946354    5666 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5666]: I1028 13:05:07.948622    5666 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5666]: I1028 13:05:07.949843    5666 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 28 13:05:07 old-k8s-version-733464 kubelet[5666]: W1028 13:05:07.950038    5666 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (215.676344ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-733464" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (761.43s)
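
The SecondStart failure above reduces to kubeadm's wait-control-plane phase timing out because the kubelet on this v1.20.0 profile never answers its health check, and minikube's own output suggests inspecting the kubelet and retrying with the systemd cgroup driver. A minimal triage sketch along those lines, assuming shell access to the same profile (old-k8s-version-733464, taken from the logs) and treating the --extra-config flag as the unverified suggestion quoted in the log rather than a confirmed fix:

	# Inspect the kubelet on the node, per the kubeadm hint in the log.
	minikube -p old-k8s-version-733464 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p old-k8s-version-733464 ssh -- sudo journalctl -u kubelet -n 100 --no-pager

	# List any control-plane containers CRI-O started (none were running in this capture).
	minikube -p old-k8s-version-733464 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# Retry the start with the cgroup driver the suggestion points at.
	minikube start -p old-k8s-version-733464 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd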

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1028 13:02:13.449868   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-702694 -n no-preload-702694
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 13:10:32.835997554 +0000 UTC m=+5614.401568994
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
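
Before the post-mortem dump below, the shape of this failure is simply that no pod matching the dashboard selector ever appeared within the 9m0s window. A hand-run equivalent of that check, assuming the kubectl context carries the profile name no-preload-702694 (minikube's default naming, but an assumption here):

	# Same namespace and label selector the test polls for 9m0s.
	kubectl --context no-preload-702694 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide

	# Rough blocking equivalent of the helper's wait loop.
	kubectl --context no-preload-702694 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m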
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-702694 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-702694 logs -n 25: (1.066497247s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-717454                              | cert-expiration-717454       | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:48 UTC |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-818470            | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-702694             | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-733464        | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-818470                 | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC | 28 Oct 24 13:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-702694                  | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-213407 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	|         | disable-driver-mounts-213407                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:05 UTC |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-783661  | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC | 28 Oct 24 13:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783661       | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:08:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:08:22.743907  134197 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:08:22.744028  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744040  134197 out.go:358] Setting ErrFile to fd 2...
	I1028 13:08:22.744047  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744230  134197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:08:22.744750  134197 out.go:352] Setting JSON to false
	I1028 13:08:22.745654  134197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10253,"bootTime":1730110650,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:08:22.745744  134197 start.go:139] virtualization: kvm guest
	I1028 13:08:22.747939  134197 out.go:177] * [default-k8s-diff-port-783661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:08:22.749403  134197 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:08:22.749457  134197 notify.go:220] Checking for updates...
	I1028 13:08:22.751796  134197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:08:22.753005  134197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:08:22.754141  134197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:08:22.755335  134197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:08:22.756546  134197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:08:22.758122  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:08:22.758528  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.758586  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.773341  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I1028 13:08:22.773804  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.774488  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.774519  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.774851  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.775031  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.775267  134197 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:08:22.775558  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.775601  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.789667  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I1028 13:08:22.790111  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.790632  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.790659  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.791008  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.791222  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.825579  134197 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 13:08:22.826616  134197 start.go:297] selected driver: kvm2
	I1028 13:08:22.826631  134197 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.826749  134197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:08:22.827454  134197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.827533  134197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:08:22.841833  134197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:08:22.842206  134197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:08:22.842238  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:08:22.842287  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:08:22.842319  134197 start.go:340] cluster config:
	{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.842425  134197 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.844980  134197 out.go:177] * Starting "default-k8s-diff-port-783661" primary control-plane node in "default-k8s-diff-port-783661" cluster
	I1028 13:08:22.846171  134197 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:08:22.846203  134197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:08:22.846210  134197 cache.go:56] Caching tarball of preloaded images
	I1028 13:08:22.846302  134197 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:08:22.846315  134197 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:08:22.846407  134197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:08:22.846587  134197 start.go:360] acquireMachinesLock for default-k8s-diff-port-783661: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:08:22.846633  134197 start.go:364] duration metric: took 26.842µs to acquireMachinesLock for "default-k8s-diff-port-783661"
	I1028 13:08:22.846652  134197 start.go:96] Skipping create...Using existing machine configuration
	I1028 13:08:22.846661  134197 fix.go:54] fixHost starting: 
	I1028 13:08:22.846932  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.846968  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.860395  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I1028 13:08:22.860752  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.861207  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.861239  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.861578  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.861740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.861874  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:08:22.863378  134197 fix.go:112] recreateIfNeeded on default-k8s-diff-port-783661: state=Running err=<nil>
	W1028 13:08:22.863410  134197 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 13:08:22.865166  134197 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-783661" VM ...
	I1028 13:08:22.866336  134197 machine.go:93] provisionDockerMachine start ...
	I1028 13:08:22.866355  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.866529  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:08:22.869364  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.869837  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:05:00 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:08:22.869861  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.870068  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:08:22.870245  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870416  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870528  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:08:22.870703  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:08:22.870930  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:08:22.870946  134197 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 13:08:25.759930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:28.831940  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:34.911959  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:37.983844  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:44.063898  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:47.135931  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:56.256018  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:59.327922  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:05.407915  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:08.479971  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:14.559886  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:17.635930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:23.711861  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:26.783972  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:32.863862  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:35.935864  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:42.015884  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:45.091903  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:51.167873  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:54.239919  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:00.319846  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:03.391949  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:09.471853  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:12.543958  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:18.623893  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:21.695970  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:27.775910  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:30.851880  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
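The run of "no route to host" errors above shows that the provisioner never regains SSH access to 192.168.61.58:22 while updating the existing "default-k8s-diff-port-783661" machine. A minimal manual probe from the test host, assuming the libvirt domain, network, and IP reported in the log above (these commands are illustrative and were not part of the original test run):

# Is the libvirt domain still running?
virsh --connect qemu:///system domstate default-k8s-diff-port-783661

# Does the VM still hold the expected DHCP lease on the profile network?
virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-783661

# Is anything reachable on the SSH port the provisioner keeps dialing?
nc -vz -w 5 192.168.61.58 22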
	
	
	==> CRI-O <==
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.425233362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121033425209274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10cc7627-b606-4e6f-a3d1-57d9172e5718 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.425827042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb579516-d4a6-4d8a-847d-e080e8ac86ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.425880660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb579516-d4a6-4d8a-847d-e080e8ac86ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.426072271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb579516-d4a6-4d8a-847d-e080e8ac86ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.460764683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a02fc8fe-1d90-4a58-b3a8-f7d8aa1e236e name=/runtime.v1.RuntimeService/Version
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.460836957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a02fc8fe-1d90-4a58-b3a8-f7d8aa1e236e name=/runtime.v1.RuntimeService/Version
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.461937469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=699d0c62-9b02-42fe-a3e4-35946ec0d868 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.462288224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121033462258019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=699d0c62-9b02-42fe-a3e4-35946ec0d868 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.463009305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94a4f143-e085-43b0-b83c-b8e8da022a9c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.463081381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94a4f143-e085-43b0-b83c-b8e8da022a9c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.464381465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94a4f143-e085-43b0-b83c-b8e8da022a9c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.503640465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21e93a89-d132-452a-837c-cac8d747a696 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.503720104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21e93a89-d132-452a-837c-cac8d747a696 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.504723892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5261c84b-b66a-42f1-97d8-2c8b5fba0d8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.505034181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121033505013986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5261c84b-b66a-42f1-97d8-2c8b5fba0d8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.505617682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1975c19f-2f16-4a0d-ab28-ac2c30331597 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.505668518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1975c19f-2f16-4a0d-ab28-ac2c30331597 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.505851595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1975c19f-2f16-4a0d-ab28-ac2c30331597 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.536102139Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b20b1bd-0493-4edf-a8da-6d01ad2b7f6a name=/runtime.v1.RuntimeService/Version
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.536166543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b20b1bd-0493-4edf-a8da-6d01ad2b7f6a name=/runtime.v1.RuntimeService/Version
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.537190434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=277c1552-58a8-42f0-b42d-8e5863d3c0cc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.537584286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121033537560745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=277c1552-58a8-42f0-b42d-8e5863d3c0cc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.538149240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=367dbf7e-be33-4f3b-8057-5e783748632a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.538221062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=367dbf7e-be33-4f3b-8057-5e783748632a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:10:33 no-preload-702694 crio[705]: time="2024-10-28 13:10:33.538407128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=367dbf7e-be33-4f3b-8057-5e783748632a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4540b20ac011       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2b12c00d44d60       storage-provisioner
	fc02ea7494c68       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   28cddba48cd1f       busybox
	b1cc4ab88fe8d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   3a7ae35ca1eb4       coredns-7c65d6cfc9-ztw6s
	cf0d300afa265       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2b12c00d44d60       storage-provisioner
	f23435879fbc7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   c844f99dd5f36       kube-proxy-ws2ns
	b597acbba8b05       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   30c481cce3d2a       etcd-no-preload-702694
	031a54940b19d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   d2477c2476eb0       kube-scheduler-no-preload-702694
	913422942e8f4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   3937c011b5fd3       kube-apiserver-no-preload-702694
	b4ee30ee5800b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   652e01b20a357       kube-controller-manager-no-preload-702694
	
	
	==> coredns [b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43065 - 18476 "HINFO IN 5657511228394046735.7229522385264883326. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.049111649s
	
	
	==> describe nodes <==
	Name:               no-preload-702694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-702694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=no-preload-702694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_48_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:48:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-702694
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 13:10:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 13:07:48 +0000   Mon, 28 Oct 2024 12:48:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 13:07:48 +0000   Mon, 28 Oct 2024 12:48:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 13:07:48 +0000   Mon, 28 Oct 2024 12:48:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 13:07:48 +0000   Mon, 28 Oct 2024 12:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.192
	  Hostname:    no-preload-702694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5dc07cb00a34b8a9d518d396a7c1405
	  System UUID:                f5dc07cb-00a3-4b8a-9d51-8d396a7c1405
	  Boot ID:                    004ac86a-5cea-4f2c-bfdb-1d8a65990f6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-ztw6s                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-702694                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-702694             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-702694    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-ws2ns                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-702694             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-wxm6t              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-702694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-702694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-702694 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node no-preload-702694 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-702694 event: Registered Node no-preload-702694 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-702694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-702694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-702694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-702694 event: Registered Node no-preload-702694 in Controller
	
	
	==> dmesg <==
	[Oct28 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050414] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036716] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.767675] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.926688] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.511740] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.143412] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.061050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050546] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.198820] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.122006] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.255246] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[ +14.866848] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.059266] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.587341] systemd-fstab-generator[1419]: Ignoring "noauto" option for root device
	[Oct28 12:57] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.975654] systemd-fstab-generator[2063]: Ignoring "noauto" option for root device
	[  +3.718250] kauditd_printk_skb: 58 callbacks suppressed
	[ +25.181303] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a] <==
	{"level":"warn","ts":"2024-10-28T12:57:10.435385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.788781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-702694\" ","response":"range_response_count:1 size:4646"}
	{"level":"info","ts":"2024-10-28T12:57:10.435704Z","caller":"traceutil/trace.go:171","msg":"trace[328428715] range","detail":"{range_begin:/registry/minions/no-preload-702694; range_end:; response_count:1; response_revision:574; }","duration":"324.111705ms","start":"2024-10-28T12:57:10.111578Z","end":"2024-10-28T12:57:10.435690Z","steps":["trace[328428715] 'agreement among raft nodes before linearized reading'  (duration: 323.629132ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:57:10.435792Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:57:10.111545Z","time spent":"324.234481ms","remote":"127.0.0.1:45304","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4669,"request content":"key:\"/registry/minions/no-preload-702694\" "}
	{"level":"warn","ts":"2024-10-28T12:57:10.435473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.777253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" ","response":"range_response_count:1 size:1219"}
	{"level":"info","ts":"2024-10-28T12:57:10.436072Z","caller":"traceutil/trace.go:171","msg":"trace[751640292] range","detail":"{range_begin:/registry/clusterrolebindings/metrics-server:system:auth-delegator; range_end:; response_count:1; response_revision:574; }","duration":"377.379076ms","start":"2024-10-28T12:57:10.058681Z","end":"2024-10-28T12:57:10.436060Z","steps":["trace[751640292] 'agreement among raft nodes before linearized reading'  (duration: 376.737303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:57:10.436135Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:57:10.058585Z","time spent":"377.53848ms","remote":"127.0.0.1:45500","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":1,"response size":1242,"request content":"key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" "}
	{"level":"warn","ts":"2024-10-28T12:57:10.691412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.619512ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16153447399271164984 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-7c65d6cfc9-ztw6s.18029f2db56c3f3a\" mod_revision:558 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-ztw6s.18029f2db56c3f3a\" value_size:838 lease:6930075362416388761 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-ztw6s.18029f2db56c3f3a\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T12:57:10.691484Z","caller":"traceutil/trace.go:171","msg":"trace[1610464725] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:610; }","duration":"220.64047ms","start":"2024-10-28T12:57:10.470827Z","end":"2024-10-28T12:57:10.691467Z","steps":["trace[1610464725] 'read index received'  (duration: 92.844281ms)","trace[1610464725] 'applied index is now lower than readState.Index'  (duration: 127.795516ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T12:57:10.691601Z","caller":"traceutil/trace.go:171","msg":"trace[1235006013] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"248.306291ms","start":"2024-10-28T12:57:10.443288Z","end":"2024-10-28T12:57:10.691595Z","steps":["trace[1235006013] 'process raft request'  (duration: 120.43663ms)","trace[1235006013] 'compare'  (duration: 127.542386ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:57:10.691932Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.09571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:metrics-server\" ","response":"range_response_count:1 size:1042"}
	{"level":"info","ts":"2024-10-28T12:57:10.691959Z","caller":"traceutil/trace.go:171","msg":"trace[51561463] range","detail":"{range_begin:/registry/clusterroles/system:metrics-server; range_end:; response_count:1; response_revision:575; }","duration":"221.129911ms","start":"2024-10-28T12:57:10.470823Z","end":"2024-10-28T12:57:10.691953Z","steps":["trace[51561463] 'agreement among raft nodes before linearized reading'  (duration: 221.05782ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:57:11.081947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.793318ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16153447399271164988 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-ws2ns\" mod_revision:563 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-ws2ns\" value_size:4738 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-ws2ns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T12:57:11.082550Z","caller":"traceutil/trace.go:171","msg":"trace[189329261] linearizableReadLoop","detail":"{readStateIndex:613; appliedIndex:611; }","duration":"363.078562ms","start":"2024-10-28T12:57:10.719416Z","end":"2024-10-28T12:57:11.082494Z","steps":["trace[189329261] 'read index received'  (duration: 232.613187ms)","trace[189329261] 'applied index is now lower than readState.Index'  (duration: 130.463097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:57:11.082786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.371172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:metrics-server\" ","response":"range_response_count:1 size:1174"}
	{"level":"info","ts":"2024-10-28T12:57:11.082850Z","caller":"traceutil/trace.go:171","msg":"trace[1064462561] range","detail":"{range_begin:/registry/clusterrolebindings/system:metrics-server; range_end:; response_count:1; response_revision:577; }","duration":"363.439841ms","start":"2024-10-28T12:57:10.719400Z","end":"2024-10-28T12:57:11.082840Z","steps":["trace[1064462561] 'agreement among raft nodes before linearized reading'  (duration: 363.315907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:57:11.082905Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:57:10.719371Z","time spent":"363.524588ms","remote":"127.0.0.1:45500","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":1197,"request content":"key:\"/registry/clusterrolebindings/system:metrics-server\" "}
	{"level":"info","ts":"2024-10-28T12:57:11.083166Z","caller":"traceutil/trace.go:171","msg":"trace[581326540] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"381.732966ms","start":"2024-10-28T12:57:10.701376Z","end":"2024-10-28T12:57:11.083109Z","steps":["trace[581326540] 'process raft request'  (duration: 250.71443ms)","trace[581326540] 'compare'  (duration: 129.689733ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T12:57:11.083386Z","caller":"traceutil/trace.go:171","msg":"trace[284824329] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"381.145887ms","start":"2024-10-28T12:57:10.702221Z","end":"2024-10-28T12:57:11.083367Z","steps":["trace[284824329] 'process raft request'  (duration: 380.211856ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:57:11.085181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:57:10.702210Z","time spent":"382.925032ms","remote":"127.0.0.1:45198","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":863,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.18029f2db5975cf3\" mod_revision:559 > success:<request_put:<key:\"/registry/events/default/busybox.18029f2db5975cf3\" value_size:796 lease:6930075362416388761 >> failure:<request_range:<key:\"/registry/events/default/busybox.18029f2db5975cf3\" > >"}
	{"level":"warn","ts":"2024-10-28T12:57:11.084337Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:57:10.701363Z","time spent":"382.92515ms","remote":"127.0.0.1:45320","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4789,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-ws2ns\" mod_revision:563 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-ws2ns\" value_size:4738 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-ws2ns\" > >"}
	{"level":"warn","ts":"2024-10-28T13:05:18.699466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.935765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:05:18.699899Z","caller":"traceutil/trace.go:171","msg":"trace[1023054071] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1019; }","duration":"160.442834ms","start":"2024-10-28T13:05:18.539421Z","end":"2024-10-28T13:05:18.699864Z","steps":["trace[1023054071] 'range keys from in-memory index tree'  (duration: 159.856902ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:07:02.906783Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-10-28T13:07:02.916564Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"9.450078ms","hash":2845446281,"current-db-size-bytes":2768896,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2768896,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-10-28T13:07:02.916621Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2845446281,"revision":859,"compact-revision":-1}
	
	
	==> kernel <==
	 13:10:33 up 14 min,  0 users,  load average: 0.01, 0.08, 0.08
	Linux no-preload-702694 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001] <==
	W1028 13:07:05.571695       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:07:05.571774       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:07:05.572756       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:07:05.572913       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:08:05.573817       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:08:05.574122       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:08:05.574197       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:08:05.574239       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 13:08:05.575343       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:08:05.575420       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:10:05.576151       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 13:10:05.576177       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:10:05.576574       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 13:10:05.576639       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:10:05.577736       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:10:05.577791       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2] <==
	E1028 13:05:08.129192       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:05:08.597030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:05:38.135451       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:05:38.604335       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:06:08.141940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:06:08.612021       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:06:38.148371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:06:38.620207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:07:08.154285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:07:08.628308       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:07:38.160583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:07:38.635740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:07:48.707059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-702694"
	E1028 13:08:08.167332       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:08:08.643111       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:08:12.632890       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="267.247µs"
	I1028 13:08:27.625546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="144.004µs"
	E1028 13:08:38.173248       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:08:38.650423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:09:08.178846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:09:08.658846       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:09:38.185394       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:09:38.667777       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:10:08.191650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:10:08.674840       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:57:06.316363       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:57:06.334056       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.192"]
	E1028 12:57:06.358637       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:57:06.450313       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:57:06.450379       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:57:06.450428       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:57:06.453872       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:57:06.454264       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:57:06.454304       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:57:06.456048       1 config.go:199] "Starting service config controller"
	I1028 12:57:06.456076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:57:06.456094       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:57:06.456098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:57:06.456426       1 config.go:328] "Starting node config controller"
	I1028 12:57:06.456451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:57:06.557862       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:57:06.557921       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:57:06.557981       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5] <==
	I1028 12:57:02.327133       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:57:04.514071       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:57:04.514186       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:57:04.514202       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:57:04.514211       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:57:04.596789       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:57:04.596844       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:57:04.613008       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:57:04.615857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:57:04.615948       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:57:04.616223       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:57:04.717219       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 13:09:20 no-preload-702694 kubelet[1426]: E1028 13:09:20.749155    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120960748645187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:09:30 no-preload-702694 kubelet[1426]: E1028 13:09:30.751157    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120970750735000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:09:30 no-preload-702694 kubelet[1426]: E1028 13:09:30.751636    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120970750735000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:09:35 no-preload-702694 kubelet[1426]: E1028 13:09:35.613141    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:09:40 no-preload-702694 kubelet[1426]: E1028 13:09:40.752879    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120980752661419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:09:40 no-preload-702694 kubelet[1426]: E1028 13:09:40.752920    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120980752661419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:09:50 no-preload-702694 kubelet[1426]: E1028 13:09:50.614670    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:09:50 no-preload-702694 kubelet[1426]: E1028 13:09:50.755029    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120990754354963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:09:50 no-preload-702694 kubelet[1426]: E1028 13:09:50.755069    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730120990754354963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:00 no-preload-702694 kubelet[1426]: E1028 13:10:00.625280    1426 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 13:10:00 no-preload-702694 kubelet[1426]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 13:10:00 no-preload-702694 kubelet[1426]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 13:10:00 no-preload-702694 kubelet[1426]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 13:10:00 no-preload-702694 kubelet[1426]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 13:10:00 no-preload-702694 kubelet[1426]: E1028 13:10:00.756845    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121000756455139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:00 no-preload-702694 kubelet[1426]: E1028 13:10:00.756882    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121000756455139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:03 no-preload-702694 kubelet[1426]: E1028 13:10:03.612882    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:10:10 no-preload-702694 kubelet[1426]: E1028 13:10:10.758561    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121010758020619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:10 no-preload-702694 kubelet[1426]: E1028 13:10:10.758997    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121010758020619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:16 no-preload-702694 kubelet[1426]: E1028 13:10:16.614025    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:10:20 no-preload-702694 kubelet[1426]: E1028 13:10:20.761215    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121020760780005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:20 no-preload-702694 kubelet[1426]: E1028 13:10:20.761474    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121020760780005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:29 no-preload-702694 kubelet[1426]: E1028 13:10:29.612698    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:10:30 no-preload-702694 kubelet[1426]: E1028 13:10:30.763437    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121030763079796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:30 no-preload-702694 kubelet[1426]: E1028 13:10:30.763793    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121030763079796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b] <==
	I1028 12:57:06.216178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 12:57:36.222403       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f] <==
	I1028 12:57:36.870337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:57:36.884374       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:57:36.884653       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:57:54.283683       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:57:54.283997       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-702694_e9fde75e-36ad-4cb2-bf31-1d0c46962973!
	I1028 12:57:54.285998       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14346255-47bd-4506-9bb0-91a999062343", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-702694_e9fde75e-36ad-4cb2-bf31-1d0c46962973 became leader
	I1028 12:57:54.385859       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-702694_e9fde75e-36ad-4cb2-bf31-1d0c46962973!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-702694 -n no-preload-702694
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-702694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-wxm6t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-702694 describe pod metrics-server-6867b74b74-wxm6t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-702694 describe pod metrics-server-6867b74b74-wxm6t: exit status 1 (61.491238ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-wxm6t" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-702694 describe pod metrics-server-6867b74b74-wxm6t: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (541.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1028 13:03:36.519747   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:04:20.376051   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-818470 -n embed-certs-818470
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 13:11:40.775419795 +0000 UTC m=+5682.340991226
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-818470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-818470 logs -n 25: (1.082041945s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-717454                              | cert-expiration-717454       | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:48 UTC |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-818470            | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-702694             | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-733464        | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-818470                 | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC | 28 Oct 24 13:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-702694                  | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-213407 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	|         | disable-driver-mounts-213407                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:05 UTC |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-783661  | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC | 28 Oct 24 13:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783661       | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:08:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:08:22.743907  134197 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:08:22.744028  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744040  134197 out.go:358] Setting ErrFile to fd 2...
	I1028 13:08:22.744047  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744230  134197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:08:22.744750  134197 out.go:352] Setting JSON to false
	I1028 13:08:22.745654  134197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10253,"bootTime":1730110650,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:08:22.745744  134197 start.go:139] virtualization: kvm guest
	I1028 13:08:22.747939  134197 out.go:177] * [default-k8s-diff-port-783661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:08:22.749403  134197 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:08:22.749457  134197 notify.go:220] Checking for updates...
	I1028 13:08:22.751796  134197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:08:22.753005  134197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:08:22.754141  134197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:08:22.755335  134197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:08:22.756546  134197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:08:22.758122  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:08:22.758528  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.758586  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.773341  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I1028 13:08:22.773804  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.774488  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.774519  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.774851  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.775031  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.775267  134197 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:08:22.775558  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.775601  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.789667  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I1028 13:08:22.790111  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.790632  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.790659  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.791008  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.791222  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.825579  134197 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 13:08:22.826616  134197 start.go:297] selected driver: kvm2
	I1028 13:08:22.826631  134197 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.826749  134197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:08:22.827454  134197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.827533  134197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:08:22.841833  134197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:08:22.842206  134197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:08:22.842238  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:08:22.842287  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:08:22.842319  134197 start.go:340] cluster config:
	{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.842425  134197 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.844980  134197 out.go:177] * Starting "default-k8s-diff-port-783661" primary control-plane node in "default-k8s-diff-port-783661" cluster
	I1028 13:08:22.846171  134197 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:08:22.846203  134197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:08:22.846210  134197 cache.go:56] Caching tarball of preloaded images
	I1028 13:08:22.846302  134197 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:08:22.846315  134197 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:08:22.846407  134197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:08:22.846587  134197 start.go:360] acquireMachinesLock for default-k8s-diff-port-783661: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:08:22.846633  134197 start.go:364] duration metric: took 26.842µs to acquireMachinesLock for "default-k8s-diff-port-783661"
	I1028 13:08:22.846652  134197 start.go:96] Skipping create...Using existing machine configuration
	I1028 13:08:22.846661  134197 fix.go:54] fixHost starting: 
	I1028 13:08:22.846932  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.846968  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.860395  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I1028 13:08:22.860752  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.861207  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.861239  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.861578  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.861740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.861874  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:08:22.863378  134197 fix.go:112] recreateIfNeeded on default-k8s-diff-port-783661: state=Running err=<nil>
	W1028 13:08:22.863410  134197 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 13:08:22.865166  134197 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-783661" VM ...
	I1028 13:08:22.866336  134197 machine.go:93] provisionDockerMachine start ...
	I1028 13:08:22.866355  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.866529  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:08:22.869364  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.869837  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:05:00 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:08:22.869861  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.870068  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:08:22.870245  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870416  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870528  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:08:22.870703  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:08:22.870930  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:08:22.870946  134197 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 13:08:25.759930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:28.831940  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:34.911959  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:37.983844  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:44.063898  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:47.135931  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:56.256018  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:59.327922  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:05.407915  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:08.479971  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:14.559886  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:17.635930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:23.711861  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:26.783972  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:32.863862  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:35.935864  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:42.015884  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:45.091903  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:51.167873  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:54.239919  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:00.319846  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:03.391949  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:09.471853  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:12.543958  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:18.623893  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:21.695970  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:27.775910  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:30.851880  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:36.927896  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:39.999969  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:46.079860  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:49.151950  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:55.231873  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:58.304033  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:04.383879  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:07.455895  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:13.535868  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:16.607992  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:22.691863  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:25.759911  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:31.839918  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:34.915917  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	
	
	==> CRI-O <==
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.361924819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121101361903301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce7b53a2-470c-4acc-8d67-545855194b22 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.362513726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9df09ae2-428b-48b3-b824-54f0b6061a3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.362577109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9df09ae2-428b-48b3-b824-54f0b6061a3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.362773906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9df09ae2-428b-48b3-b824-54f0b6061a3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.397261283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9169ab0e-13ff-419f-a5fe-77c9a36e39a2 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.397341859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9169ab0e-13ff-419f-a5fe-77c9a36e39a2 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.398602858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa46698c-aa07-4455-89df-ef8857fbcdec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.399057160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121101399033300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa46698c-aa07-4455-89df-ef8857fbcdec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.399546755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d804cbfc-ceca-48a0-9507-ad709f9d6ca9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.399604206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d804cbfc-ceca-48a0-9507-ad709f9d6ca9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.400008518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d804cbfc-ceca-48a0-9507-ad709f9d6ca9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.433117979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48604ac3-941c-4df8-8a1b-cfd57712e99e name=/runtime.v1.RuntimeService/Version
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.433187889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48604ac3-941c-4df8-8a1b-cfd57712e99e name=/runtime.v1.RuntimeService/Version
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.434468404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e470eeca-5c41-4fe6-b674-876dbd0e5713 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.434852475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121101434829162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e470eeca-5c41-4fe6-b674-876dbd0e5713 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.435628251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49e3040e-cf1f-41df-9b5d-78f2799431cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.435684100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49e3040e-cf1f-41df-9b5d-78f2799431cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.436190092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49e3040e-cf1f-41df-9b5d-78f2799431cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.464580699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=497a62fd-3771-4e3e-bd30-1f5d9b33a9ad name=/runtime.v1.RuntimeService/Version
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.464666405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=497a62fd-3771-4e3e-bd30-1f5d9b33a9ad name=/runtime.v1.RuntimeService/Version
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.465676356Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e64258f-fca4-4f87-acb2-a3370ca11eef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.466127629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121101466106984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e64258f-fca4-4f87-acb2-a3370ca11eef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.466635312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d24f8f5-e960-46cf-b8d4-3112fea6a55d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.466684901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d24f8f5-e960-46cf-b8d4-3112fea6a55d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:11:41 embed-certs-818470 crio[709]: time="2024-10-28 13:11:41.466880104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d24f8f5-e960-46cf-b8d4-3112fea6a55d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	216d910684a64       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   eafd327cc40dd       kube-proxy-fnp29
	51f00087da19f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   fa512650272ea       storage-provisioner
	c946b22272329       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   9f9b8bb378fa0       coredns-7c65d6cfc9-qcqc4
	ccb55f7524b1b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ce1a1a104c081       coredns-7c65d6cfc9-dhnvt
	fd43e7c634614       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   f58535f348235       kube-controller-manager-embed-certs-818470
	a013983c01bbd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   24656799c6033       kube-scheduler-embed-certs-818470
	ae1e4e7f8dc7e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   86984d33d56b9       kube-apiserver-embed-certs-818470
	1d3bb04bab09a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   960b61c8cb894       etcd-embed-certs-818470
	bf46bfafb0773       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   ea12c1dd2e35c       kube-apiserver-embed-certs-818470
	
	
	==> coredns [c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-818470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-818470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=embed-certs-818470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T13_02_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 13:02:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-818470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 13:11:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 13:07:39 +0000   Mon, 28 Oct 2024 13:02:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 13:07:39 +0000   Mon, 28 Oct 2024 13:02:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 13:07:39 +0000   Mon, 28 Oct 2024 13:02:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 13:07:39 +0000   Mon, 28 Oct 2024 13:02:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.164
	  Hostname:    embed-certs-818470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb71a22bbf964c239bcc801ef66a0686
	  System UUID:                fb71a22b-bf96-4c23-9bcc-801ef66a0686
	  Boot ID:                    05767ac6-cbb1-40dd-a742-a92355748028
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dhnvt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-qcqc4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-embed-certs-818470                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-818470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-818470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-fnp29                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-818470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-gch8d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node embed-certs-818470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node embed-certs-818470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node embed-certs-818470 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node embed-certs-818470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node embed-certs-818470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node embed-certs-818470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node embed-certs-818470 event: Registered Node embed-certs-818470 in Controller
	
	
	==> dmesg <==
	[  +0.063255] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041933] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.159918] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.913825] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.564294] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.986530] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.057750] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059955] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.180714] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.141216] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.275098] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +3.759523] systemd-fstab-generator[790]: Ignoring "noauto" option for root device
	[  +1.881414] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.061451] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.506624] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.163782] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 13:02] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +0.058103] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.008876] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.967478] systemd-fstab-generator[2902]: Ignoring "noauto" option for root device
	[  +5.858477] systemd-fstab-generator[3033]: Ignoring "noauto" option for root device
	[  +0.096256] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.908851] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139] <==
	{"level":"info","ts":"2024-10-28T13:02:18.378758Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.164:2380"}
	{"level":"info","ts":"2024-10-28T13:02:18.378812Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.164:2380"}
	{"level":"info","ts":"2024-10-28T13:02:18.433083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T13:02:18.433255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T13:02:18.433386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgPreVoteResp from 80a63a57d726c697 at term 1"}
	{"level":"info","ts":"2024-10-28T13:02:18.433498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T13:02:18.433529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgVoteResp from 80a63a57d726c697 at term 2"}
	{"level":"info","ts":"2024-10-28T13:02:18.433590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became leader at term 2"}
	{"level":"info","ts":"2024-10-28T13:02:18.433613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 80a63a57d726c697 elected leader 80a63a57d726c697 at term 2"}
	{"level":"info","ts":"2024-10-28T13:02:18.436391Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T13:02:18.440204Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"80a63a57d726c697","local-member-attributes":"{Name:embed-certs-818470 ClientURLs:[https://192.168.50.164:2379]}","request-path":"/0/members/80a63a57d726c697/attributes","cluster-id":"d41e51b80202c3fb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T13:02:18.440366Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T13:02:18.441479Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T13:02:18.443692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T13:02:18.445046Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T13:02:18.446079Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T13:02:18.447065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T13:02:18.450089Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T13:02:18.449724Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.164:2379"}
	{"level":"info","ts":"2024-10-28T13:02:18.450600Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d41e51b80202c3fb","local-member-id":"80a63a57d726c697","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T13:02:18.457626Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T13:02:18.457922Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T13:05:21.000282Z","caller":"traceutil/trace.go:171","msg":"trace[959956370] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"135.906229ms","start":"2024-10-28T13:05:20.864336Z","end":"2024-10-28T13:05:21.000242Z","steps":["trace[959956370] 'process raft request'  (duration: 135.788142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:05:21.238789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.795141ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:05:21.238935Z","caller":"traceutil/trace.go:171","msg":"trace[1845988932] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:626; }","duration":"105.026367ms","start":"2024-10-28T13:05:21.133894Z","end":"2024-10-28T13:05:21.238920Z","steps":["trace[1845988932] 'range keys from in-memory index tree'  (duration: 104.779921ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:11:41 up 14 min,  0 users,  load average: 0.34, 0.16, 0.10
	Linux embed-certs-818470 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:07:21.361952       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:07:21.362066       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 13:07:21.363022       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:07:21.363126       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:08:21.363416       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 13:08:21.363458       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:08:21.363773       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 13:08:21.363829       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:08:21.365003       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:08:21.365064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:10:21.365530       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:10:21.365667       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 13:10:21.365539       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:10:21.365788       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:10:21.367061       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:10:21.367083       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41] <==
	W1028 13:02:10.420022       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.458107       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.461596       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.485440       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.488936       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.506155       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.613252       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.665321       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.694307       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.713176       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.753044       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.769128       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.861761       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.882857       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.121152       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.132643       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.164569       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.234519       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.383470       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:12.753571       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:14.699502       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:14.938381       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:14.965617       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:15.030895       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:15.154411       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78] <==
	E1028 13:06:27.376835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:06:27.806478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:06:57.383099       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:06:57.814252       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:07:27.389233       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:07:27.821531       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:07:39.180512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-818470"
	E1028 13:07:57.395435       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:07:57.830319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:08:20.780938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="231.734µs"
	E1028 13:08:27.401752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:08:27.837792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:08:33.774655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="98.615µs"
	E1028 13:08:57.407339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:08:57.848455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:09:27.413444       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:09:27.855756       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:09:57.419202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:09:57.863452       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:10:27.425597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:10:27.871548       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:10:57.431761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:10:57.878861       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:11:27.438108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:11:27.886650       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 13:02:30.154068       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 13:02:30.162398       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.164"]
	E1028 13:02:30.162570       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 13:02:30.191092       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 13:02:30.191127       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 13:02:30.191156       1 server_linux.go:169] "Using iptables Proxier"
	I1028 13:02:30.193299       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 13:02:30.193783       1 server.go:483] "Version info" version="v1.31.2"
	I1028 13:02:30.193828       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 13:02:30.195176       1 config.go:199] "Starting service config controller"
	I1028 13:02:30.195228       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 13:02:30.195268       1 config.go:105] "Starting endpoint slice config controller"
	I1028 13:02:30.195294       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 13:02:30.195737       1 config.go:328] "Starting node config controller"
	I1028 13:02:30.197463       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 13:02:30.296405       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 13:02:30.296455       1 shared_informer.go:320] Caches are synced for service config
	I1028 13:02:30.297831       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7] <==
	W1028 13:02:20.394637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 13:02:20.394735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.220079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 13:02:21.220189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.227673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.227757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.241094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 13:02:21.241134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.298824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 13:02:21.298877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.350364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.350408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.413320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 13:02:21.413368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.415045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 13:02:21.415083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.509575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 13:02:21.509694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.541518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.541569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.541691       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 13:02:21.541718       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 13:02:21.603096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.603166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 13:02:23.586477       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
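Note: the burst of "forbidden" list/watch errors above is the usual startup race — the scheduler's informers begin listing before RBAC bootstrapping has finished — and the final "Caches are synced" line shows it resolved on its own. If similar errors persisted past startup, a hedged way to check the scheduler's effective permissions is impersonation (this assumes the admin kubeconfig the test uses, which can impersonate):

  kubectl --context embed-certs-818470 auth can-i list services --as=system:kube-scheduler
  kubectl --context embed-certs-818470 auth can-i list nodes --as=system:kube-scheduler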
	
	
	==> kubelet <==
	Oct 28 13:10:29 embed-certs-818470 kubelet[2909]: E1028 13:10:29.760468    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:10:32 embed-certs-818470 kubelet[2909]: E1028 13:10:32.914405    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121032913807338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:32 embed-certs-818470 kubelet[2909]: E1028 13:10:32.914829    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121032913807338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:42 embed-certs-818470 kubelet[2909]: E1028 13:10:42.760524    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:10:42 embed-certs-818470 kubelet[2909]: E1028 13:10:42.916112    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121042915728327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:42 embed-certs-818470 kubelet[2909]: E1028 13:10:42.916164    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121042915728327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:52 embed-certs-818470 kubelet[2909]: E1028 13:10:52.918732    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121052918329221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:52 embed-certs-818470 kubelet[2909]: E1028 13:10:52.918770    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121052918329221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:10:55 embed-certs-818470 kubelet[2909]: E1028 13:10:55.759765    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:11:02 embed-certs-818470 kubelet[2909]: E1028 13:11:02.920566    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121062920094576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:11:02 embed-certs-818470 kubelet[2909]: E1028 13:11:02.920844    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121062920094576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:11:06 embed-certs-818470 kubelet[2909]: E1028 13:11:06.760035    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:11:12 embed-certs-818470 kubelet[2909]: E1028 13:11:12.923307    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121072922891277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:11:12 embed-certs-818470 kubelet[2909]: E1028 13:11:12.923345    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121072922891277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:11:19 embed-certs-818470 kubelet[2909]: E1028 13:11:19.760582    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:11:22 embed-certs-818470 kubelet[2909]: E1028 13:11:22.776450    2909 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 13:11:22 embed-certs-818470 kubelet[2909]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 13:11:22 embed-certs-818470 kubelet[2909]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 13:11:22 embed-certs-818470 kubelet[2909]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 13:11:22 embed-certs-818470 kubelet[2909]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 13:11:22 embed-certs-818470 kubelet[2909]: E1028 13:11:22.925581    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121082925151541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:11:22 embed-certs-818470 kubelet[2909]: E1028 13:11:22.925652    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121082925151541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:11:30 embed-certs-818470 kubelet[2909]: E1028 13:11:30.761782    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:11:32 embed-certs-818470 kubelet[2909]: E1028 13:11:32.927012    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121092926528983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:11:32 embed-certs-818470 kubelet[2909]: E1028 13:11:32.927272    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121092926528983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
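Note: the recurring ImagePullBackOff is for fake.domain/registry.k8s.io/echoserver:1.4 — the unresolvable registry suggests the test deliberately points metrics-server at a bad image — and it matches the one non-running pod reported below. The eviction-manager "missing image stats" lines look like a kubelet/CRI-O image-stats mismatch rather than the cause of this failure. A sketch for inspecting the pull failure by hand (the Deployment name and the k8s-app=metrics-server label are assumptions based on the usual metrics-server addon manifests, not taken from this report):

  kubectl --context embed-certs-818470 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
  kubectl --context embed-certs-818470 -n kube-system describe pods -l k8s-app=metrics-server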
	
	
	==> storage-provisioner [51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677] <==
	I1028 13:02:29.806264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 13:02:29.837690       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 13:02:29.837741       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 13:02:29.865797       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 13:02:29.865964       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-818470_cf509ba4-379a-473a-822b-0391becb58d3!
	I1028 13:02:29.866057       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1388a12-67ad-42ed-908d-5ed5e6961363", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-818470_cf509ba4-379a-473a-822b-0391becb58d3 became leader
	I1028 13:02:29.972174       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-818470_cf509ba4-379a-473a-822b-0391becb58d3!
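Note: the storage-provisioner came up cleanly and won leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object named above. If a later run needed to see who currently holds the lease, the record lives on that object (the annotation name is the usual client-go one and is an assumption here, not shown in this log):

  kubectl --context embed-certs-818470 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
  # look for the control-plane.alpha.kubernetes.io/leader annotation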
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-818470 -n embed-certs-818470
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-818470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gch8d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-818470 describe pod metrics-server-6867b74b74-gch8d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-818470 describe pod metrics-server-6867b74b74-gch8d: exit status 1 (58.052128ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gch8d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-818470 describe pod metrics-server-6867b74b74-gch8d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (541.94s)
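Note: 541.94s is the 9-minute wait plus teardown overhead; the user-app pods the test polls for never became ready, while the post-mortem above only flags the metrics-server pod discussed earlier. To repeat the same check by hand against this profile — assuming it uses the same k8s-app=kubernetes-dashboard selector as the old-k8s-version variant below — a sketch:

  kubectl --context embed-certs-818470 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context embed-certs-818470 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m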

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
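Note: every warning that follows is the same failure — the Kubernetes API server at 192.168.39.208:8443 refuses connections throughout the polling shown below, so the pod list can never be retrieved. A quick host-side check for whether the control plane ever came back (the profile name is not shown in this excerpt, so the minikube command uses a placeholder):

  curl -k https://192.168.39.208:8443/healthz
  out/minikube-linux-amd64 status -p <old-k8s-version-profile>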
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
E1028 13:07:13.448915   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
E1028 13:09:20.376007   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
E1028 13:12:13.449063   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
E1028 13:12:23.449140   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (231.268205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-733464" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (211.7484ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-733464 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-717454                              | cert-expiration-717454       | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:48 UTC |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-818470            | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-702694             | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-733464        | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-818470                 | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC | 28 Oct 24 13:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-702694                  | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-213407 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	|         | disable-driver-mounts-213407                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:05 UTC |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-783661  | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC | 28 Oct 24 13:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783661       | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:08:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:08:22.743907  134197 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:08:22.744028  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744040  134197 out.go:358] Setting ErrFile to fd 2...
	I1028 13:08:22.744047  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744230  134197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:08:22.744750  134197 out.go:352] Setting JSON to false
	I1028 13:08:22.745654  134197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10253,"bootTime":1730110650,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:08:22.745744  134197 start.go:139] virtualization: kvm guest
	I1028 13:08:22.747939  134197 out.go:177] * [default-k8s-diff-port-783661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:08:22.749403  134197 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:08:22.749457  134197 notify.go:220] Checking for updates...
	I1028 13:08:22.751796  134197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:08:22.753005  134197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:08:22.754141  134197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:08:22.755335  134197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:08:22.756546  134197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:08:22.758122  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:08:22.758528  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.758586  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.773341  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I1028 13:08:22.773804  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.774488  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.774519  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.774851  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.775031  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.775267  134197 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:08:22.775558  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.775601  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.789667  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I1028 13:08:22.790111  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.790632  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.790659  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.791008  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.791222  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.825579  134197 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 13:08:22.826616  134197 start.go:297] selected driver: kvm2
	I1028 13:08:22.826631  134197 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.826749  134197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:08:22.827454  134197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.827533  134197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:08:22.841833  134197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:08:22.842206  134197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:08:22.842238  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:08:22.842287  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:08:22.842319  134197 start.go:340] cluster config:
	{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.842425  134197 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.844980  134197 out.go:177] * Starting "default-k8s-diff-port-783661" primary control-plane node in "default-k8s-diff-port-783661" cluster
	I1028 13:08:22.846171  134197 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:08:22.846203  134197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:08:22.846210  134197 cache.go:56] Caching tarball of preloaded images
	I1028 13:08:22.846302  134197 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:08:22.846315  134197 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:08:22.846407  134197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:08:22.846587  134197 start.go:360] acquireMachinesLock for default-k8s-diff-port-783661: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:08:22.846633  134197 start.go:364] duration metric: took 26.842µs to acquireMachinesLock for "default-k8s-diff-port-783661"
	I1028 13:08:22.846652  134197 start.go:96] Skipping create...Using existing machine configuration
	I1028 13:08:22.846661  134197 fix.go:54] fixHost starting: 
	I1028 13:08:22.846932  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.846968  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.860395  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I1028 13:08:22.860752  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.861207  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.861239  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.861578  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.861740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.861874  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:08:22.863378  134197 fix.go:112] recreateIfNeeded on default-k8s-diff-port-783661: state=Running err=<nil>
	W1028 13:08:22.863410  134197 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 13:08:22.865166  134197 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-783661" VM ...
	I1028 13:08:22.866336  134197 machine.go:93] provisionDockerMachine start ...
	I1028 13:08:22.866355  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.866529  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:08:22.869364  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.869837  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:05:00 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:08:22.869861  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.870068  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:08:22.870245  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870416  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870528  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:08:22.870703  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:08:22.870930  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:08:22.870946  134197 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 13:08:25.759930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:28.831940  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:34.911959  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:37.983844  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:44.063898  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:47.135931  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:56.256018  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:59.327922  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:05.407915  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:08.479971  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:14.559886  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:17.635930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:23.711861  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:26.783972  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:32.863862  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:35.935864  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:42.015884  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:45.091903  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:51.167873  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:54.239919  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:00.319846  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:03.391949  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:09.471853  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:12.543958  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:18.623893  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:21.695970  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:27.775910  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:30.851880  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:36.927896  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:39.999969  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:46.079860  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:49.151950  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:55.231873  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:58.304033  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:04.383879  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:07.455895  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:13.535868  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:16.607992  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:22.691863  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:25.759911  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:31.839918  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:34.915917  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:40.991816  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:44.063821  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:50.143851  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:53.215876  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:59.295883  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:02.367891  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:08.447861  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:11.519919  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:17.599962  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:20.671890  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:26.751894  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:29.823995  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:35.903877  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:38.975878  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:45.055820  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:48.127923  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:54.207852  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:57.279901  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:13:00.282367  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 13:13:00.282410  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:00.282710  134197 buildroot.go:166] provisioning hostname "default-k8s-diff-port-783661"
	I1028 13:13:00.282740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:00.282912  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:00.284376  134197 machine.go:96] duration metric: took 4m37.418023894s to provisionDockerMachine
	I1028 13:13:00.284414  134197 fix.go:56] duration metric: took 4m37.437752982s for fixHost
	I1028 13:13:00.284426  134197 start.go:83] releasing machines lock for "default-k8s-diff-port-783661", held for 4m37.437782013s
	W1028 13:13:00.284446  134197 start.go:714] error starting host: provision: host is not running
	W1028 13:13:00.284577  134197 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 13:13:00.284588  134197 start.go:729] Will try again in 5 seconds ...
	I1028 13:13:05.286973  134197 start.go:360] acquireMachinesLock for default-k8s-diff-port-783661: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:13:05.287087  134197 start.go:364] duration metric: took 72.329µs to acquireMachinesLock for "default-k8s-diff-port-783661"
	I1028 13:13:05.287116  134197 start.go:96] Skipping create...Using existing machine configuration
	I1028 13:13:05.287124  134197 fix.go:54] fixHost starting: 
	I1028 13:13:05.287464  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:05.287491  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:05.302541  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I1028 13:13:05.303110  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:05.303659  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:05.303684  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:05.304035  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:05.304229  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:05.304406  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:05.305973  134197 fix.go:112] recreateIfNeeded on default-k8s-diff-port-783661: state=Stopped err=<nil>
	I1028 13:13:05.305996  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	W1028 13:13:05.306168  134197 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 13:13:05.308037  134197 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-783661" ...
	I1028 13:13:05.309346  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Start
	I1028 13:13:05.309513  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring networks are active...
	I1028 13:13:05.310213  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring network default is active
	I1028 13:13:05.310554  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring network mk-default-k8s-diff-port-783661 is active
	I1028 13:13:05.311086  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Getting domain xml...
	I1028 13:13:05.311852  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Creating domain...
	I1028 13:13:06.540494  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting to get IP...
	I1028 13:13:06.541481  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.541978  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.542062  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:06.541938  135448 retry.go:31] will retry after 231.647331ms: waiting for machine to come up
	I1028 13:13:06.775409  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.775987  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.776017  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:06.775942  135448 retry.go:31] will retry after 239.756878ms: waiting for machine to come up
	I1028 13:13:07.017477  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.018004  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.018032  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:07.017953  135448 retry.go:31] will retry after 422.324589ms: waiting for machine to come up
	I1028 13:13:07.441468  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.441999  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.442037  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:07.441939  135448 retry.go:31] will retry after 578.443419ms: waiting for machine to come up
	I1028 13:13:08.021645  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.022146  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.022178  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:08.022086  135448 retry.go:31] will retry after 647.039207ms: waiting for machine to come up
	I1028 13:13:08.670333  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.670868  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.670892  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:08.670811  135448 retry.go:31] will retry after 714.058494ms: waiting for machine to come up
	I1028 13:13:09.386779  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:09.387215  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:09.387243  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:09.387168  135448 retry.go:31] will retry after 894.856792ms: waiting for machine to come up
	I1028 13:13:10.283188  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:10.283686  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:10.283718  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:10.283624  135448 retry.go:31] will retry after 1.265291459s: waiting for machine to come up
	I1028 13:13:11.550244  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:11.550726  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:11.550749  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:11.550654  135448 retry.go:31] will retry after 1.249743184s: waiting for machine to come up
	I1028 13:13:12.801975  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:12.802396  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:12.802410  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:12.802366  135448 retry.go:31] will retry after 2.31180583s: waiting for machine to come up
	I1028 13:13:15.116926  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:15.117467  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:15.117496  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:15.117428  135448 retry.go:31] will retry after 2.267258035s: waiting for machine to come up
	I1028 13:13:17.387100  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:17.387516  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:17.387548  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:17.387478  135448 retry.go:31] will retry after 2.277192393s: waiting for machine to come up
	I1028 13:13:19.666742  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:19.667120  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:19.667150  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:19.667075  135448 retry.go:31] will retry after 3.233541624s: waiting for machine to come up
	I1028 13:13:22.903660  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.904189  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Found IP for machine: 192.168.61.58
	I1028 13:13:22.904219  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has current primary IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.904225  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Reserving static IP address...
	I1028 13:13:22.904647  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-783661", mac: "52:54:00:07:89:7c", ip: "192.168.61.58"} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:22.904690  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | skip adding static IP to network mk-default-k8s-diff-port-783661 - found existing host DHCP lease matching {name: "default-k8s-diff-port-783661", mac: "52:54:00:07:89:7c", ip: "192.168.61.58"}
	I1028 13:13:22.904721  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Reserved static IP address: 192.168.61.58
	I1028 13:13:22.904740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for SSH to be available...
	I1028 13:13:22.904756  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Getting to WaitForSSH function...
	I1028 13:13:22.906960  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.907271  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:22.907295  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.907443  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Using SSH client type: external
	I1028 13:13:22.907469  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa (-rw-------)
	I1028 13:13:22.907494  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 13:13:22.907504  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | About to run SSH command:
	I1028 13:13:22.907526  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | exit 0
	I1028 13:13:23.027352  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | SSH cmd err, output: <nil>: 
	I1028 13:13:23.027735  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetConfigRaw
	I1028 13:13:23.028363  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:23.031114  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.031475  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.031508  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.031772  134197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:13:23.031996  134197 machine.go:93] provisionDockerMachine start ...
	I1028 13:13:23.032018  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:23.032261  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.034841  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.035229  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.035258  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.035396  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.035574  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.035752  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.035900  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.036048  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.036241  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.036252  134197 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 13:13:23.131447  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 13:13:23.131477  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:23.131732  134197 buildroot.go:166] provisioning hostname "default-k8s-diff-port-783661"
	I1028 13:13:23.131767  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:23.131952  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.134431  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.134729  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.134755  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.134875  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.135054  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.135195  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.135337  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.135498  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.135705  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.135726  134197 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-783661 && echo "default-k8s-diff-port-783661" | sudo tee /etc/hostname
	I1028 13:13:23.244094  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-783661
	
	I1028 13:13:23.244135  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.246707  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.247039  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.247069  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.247226  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.247405  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.247545  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.247664  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.247836  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.248022  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.248046  134197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-783661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-783661/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-783661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 13:13:23.351444  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 13:13:23.351480  134197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 13:13:23.351510  134197 buildroot.go:174] setting up certificates
	I1028 13:13:23.351526  134197 provision.go:84] configureAuth start
	I1028 13:13:23.351536  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:23.351842  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:23.354294  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.354607  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.354633  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.354785  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.356931  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.357242  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.357263  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.357408  134197 provision.go:143] copyHostCerts
	I1028 13:13:23.357480  134197 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 13:13:23.357494  134197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 13:13:23.357556  134197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 13:13:23.357663  134197 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 13:13:23.357671  134197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 13:13:23.357697  134197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 13:13:23.357770  134197 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 13:13:23.357777  134197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 13:13:23.357803  134197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 13:13:23.357864  134197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-783661 san=[127.0.0.1 192.168.61.58 default-k8s-diff-port-783661 localhost minikube]
	I1028 13:13:23.500838  134197 provision.go:177] copyRemoteCerts
	I1028 13:13:23.500902  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 13:13:23.500927  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.503917  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.504289  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.504316  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.504498  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.504694  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.504874  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.505018  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:23.580704  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 13:13:23.602410  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 13:13:23.623660  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 13:13:23.646050  134197 provision.go:87] duration metric: took 294.509447ms to configureAuth
	I1028 13:13:23.646084  134197 buildroot.go:189] setting minikube options for container-runtime
	I1028 13:13:23.646294  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:13:23.646385  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.649055  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.649434  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.649465  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.649715  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.649912  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.650067  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.650166  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.650329  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.650512  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.650530  134197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 13:13:23.853315  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 13:13:23.853340  134197 machine.go:96] duration metric: took 821.330249ms to provisionDockerMachine
	I1028 13:13:23.853353  134197 start.go:293] postStartSetup for "default-k8s-diff-port-783661" (driver="kvm2")
	I1028 13:13:23.853365  134197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 13:13:23.853409  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:23.853730  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 13:13:23.853758  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.856419  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.856746  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.856777  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.856883  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.857052  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.857219  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.857341  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:23.933578  134197 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 13:13:23.937169  134197 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 13:13:23.937202  134197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 13:13:23.937278  134197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 13:13:23.937367  134197 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 13:13:23.937486  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 13:13:23.945951  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:13:23.967255  134197 start.go:296] duration metric: took 113.888302ms for postStartSetup
	I1028 13:13:23.967294  134197 fix.go:56] duration metric: took 18.680170342s for fixHost
	I1028 13:13:23.967316  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.969931  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.970289  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.970319  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.970502  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.970696  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.970868  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.970994  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.971144  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.971347  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.971362  134197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 13:13:24.067579  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730121204.042610744
	
	I1028 13:13:24.067601  134197 fix.go:216] guest clock: 1730121204.042610744
	I1028 13:13:24.067610  134197 fix.go:229] Guest: 2024-10-28 13:13:24.042610744 +0000 UTC Remote: 2024-10-28 13:13:23.967298865 +0000 UTC m=+301.263399635 (delta=75.311879ms)
	I1028 13:13:24.067656  134197 fix.go:200] guest clock delta is within tolerance: 75.311879ms
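The fix.go lines above read the guest's clock over SSH with date +%s.%N and accept the host when the delta against the local clock is within tolerance. Below is a minimal, hypothetical Go sketch of that check; the parsing helper, the hard-coded sample timestamp, and the 2-second tolerance are illustrative assumptions, not minikube's actual values.

// clockdelta.go — hypothetical sketch of the guest-clock check logged above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseUnixSeconds turns the output of `date +%s.%N` into a time.Time.
func parseUnixSeconds(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	// In the real flow this string comes back over SSH; here it is hard-coded.
	guest, err := parseUnixSeconds("1730121204.042610744")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // assumed tolerance for illustration only
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}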
	I1028 13:13:24.067663  134197 start.go:83] releasing machines lock for "default-k8s-diff-port-783661", held for 18.78056169s
	I1028 13:13:24.067691  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.067935  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:24.070598  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.070986  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:24.071026  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.071308  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.071858  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.072056  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.072173  134197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 13:13:24.072241  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:24.072334  134197 ssh_runner.go:195] Run: cat /version.json
	I1028 13:13:24.072362  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:24.075272  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075444  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075579  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:24.075605  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075743  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:24.075831  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:24.075864  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075885  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:24.076024  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:24.076073  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:24.076150  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:24.076220  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:24.076318  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:24.076449  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:24.148656  134197 ssh_runner.go:195] Run: systemctl --version
	I1028 13:13:24.173826  134197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 13:13:24.314420  134197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 13:13:24.320964  134197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 13:13:24.321040  134197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 13:13:24.336093  134197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 13:13:24.336114  134197 start.go:495] detecting cgroup driver to use...
	I1028 13:13:24.336176  134197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 13:13:24.355586  134197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 13:13:24.369613  134197 docker.go:217] disabling cri-docker service (if available) ...
	I1028 13:13:24.369661  134197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 13:13:24.383661  134197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 13:13:24.397552  134197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 13:13:24.517746  134197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 13:13:24.667013  134197 docker.go:233] disabling docker service ...
	I1028 13:13:24.667115  134197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 13:13:24.680756  134197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 13:13:24.692610  134197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 13:13:24.812530  134197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 13:13:24.921788  134197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 13:13:24.934431  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 13:13:24.950796  134197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 13:13:24.950855  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.959904  134197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 13:13:24.959974  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.968923  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.977711  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.986789  134197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 13:13:24.996658  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:25.005472  134197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:25.020549  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:25.029317  134197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 13:13:25.037514  134197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 13:13:25.037614  134197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 13:13:25.050018  134197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 13:13:25.058328  134197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:13:25.164529  134197 ssh_runner.go:195] Run: sudo systemctl restart crio
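The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. As a rough local equivalent of the two key replacements, here is a hedged Go sketch; the file path and values are taken from the log, while the program itself is illustrative and not minikube's code.

// criocfg.go — hypothetical local equivalent of the sed edits shown above.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Point CRI-O at the pause:3.10 image, mirroring the first sed edit.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Switch the cgroup manager to cgroupfs, mirroring the second sed edit.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}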
	I1028 13:13:25.248691  134197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 13:13:25.248759  134197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 13:13:25.252922  134197 start.go:563] Will wait 60s for crictl version
	I1028 13:13:25.252997  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:13:25.256182  134197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 13:13:25.294375  134197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 13:13:25.294522  134197 ssh_runner.go:195] Run: crio --version
	I1028 13:13:25.321489  134197 ssh_runner.go:195] Run: crio --version
	I1028 13:13:25.349730  134197 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 13:13:25.351032  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:25.353570  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:25.353919  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:25.353944  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:25.354159  134197 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 13:13:25.357796  134197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:13:25.369212  134197 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 13:13:25.369364  134197 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:13:25.369421  134197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:13:25.400975  134197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 13:13:25.401039  134197 ssh_runner.go:195] Run: which lz4
	I1028 13:13:25.404590  134197 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 13:13:25.408131  134197 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 13:13:25.408164  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 13:13:26.592887  134197 crio.go:462] duration metric: took 1.18831143s to copy over tarball
	I1028 13:13:26.592984  134197 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 13:13:28.669692  134197 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.07667117s)
	I1028 13:13:28.669728  134197 crio.go:469] duration metric: took 2.076802189s to extract the tarball
	I1028 13:13:28.669739  134197 ssh_runner.go:146] rm: /preloaded.tar.lz4
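The preload step above copies preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it into /var with tar -I lz4, preserving xattrs. A minimal Go stand-in for that extraction step, shelling out to the same tar invocation, might look like the sketch below (it assumes the tarball is already at /preloaded.tar.lz4 and that lz4 is installed).

// preload.go — hypothetical stand-in for the logged preload extraction.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	fmt.Printf("extracted preloaded images in %v\n", time.Since(start))
}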
	I1028 13:13:28.705768  134197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:13:28.746918  134197 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 13:13:28.746943  134197 cache_images.go:84] Images are preloaded, skipping loading
	I1028 13:13:28.746953  134197 kubeadm.go:934] updating node { 192.168.61.58 8444 v1.31.2 crio true true} ...
	I1028 13:13:28.747105  134197 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-783661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 13:13:28.747193  134197 ssh_runner.go:195] Run: crio config
	I1028 13:13:28.799814  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:13:28.799844  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:13:28.799866  134197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 13:13:28.799905  134197 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-783661 NodeName:default-k8s-diff-port-783661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 13:13:28.800138  134197 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-783661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 13:13:28.800228  134197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 13:13:28.809781  134197 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 13:13:28.809860  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 13:13:28.818307  134197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 13:13:28.833165  134197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 13:13:28.847557  134197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 13:13:28.862507  134197 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I1028 13:13:28.865883  134197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:13:28.876993  134197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:13:29.010474  134197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:13:29.026282  134197 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661 for IP: 192.168.61.58
	I1028 13:13:29.026319  134197 certs.go:194] generating shared ca certs ...
	I1028 13:13:29.026341  134197 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:13:29.026554  134197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 13:13:29.026615  134197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 13:13:29.026635  134197 certs.go:256] generating profile certs ...
	I1028 13:13:29.026770  134197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/client.key
	I1028 13:13:29.026859  134197 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/apiserver.key.2140521c
	I1028 13:13:29.026902  134197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/proxy-client.key
	I1028 13:13:29.027067  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 13:13:29.027113  134197 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 13:13:29.027129  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 13:13:29.027183  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 13:13:29.027218  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 13:13:29.027256  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 13:13:29.027314  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:13:29.028337  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 13:13:29.059748  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 13:13:29.090749  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 13:13:29.118669  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 13:13:29.145013  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 13:13:29.176049  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 13:13:29.199479  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 13:13:29.225368  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 13:13:29.248427  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 13:13:29.270163  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 13:13:29.291310  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 13:13:29.313075  134197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 13:13:29.329050  134197 ssh_runner.go:195] Run: openssl version
	I1028 13:13:29.334785  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 13:13:29.345731  134197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 13:13:29.349902  134197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 13:13:29.349950  134197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 13:13:29.355107  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 13:13:29.364475  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 13:13:29.373697  134197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:13:29.377792  134197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:13:29.377850  134197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:13:29.382892  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 13:13:29.392054  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 13:13:29.402513  134197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 13:13:29.406438  134197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 13:13:29.406511  134197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 13:13:29.411444  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
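The certs.go lines above install each CA certificate under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs under its OpenSSL subject hash (for example 51391683.0), which is how OpenSSL-based clients locate trusted CAs. Below is a hedged Go sketch of that hash-and-link pattern, invoking the openssl binary just as the logged commands do; the certificate path is taken from the log, and everything else is illustrative.

// certlink.go — sketch of the subject-hash symlink step seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	// Compute the OpenSSL subject hash of the certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: remove any stale link, then create a fresh symlink.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Printf("linked %s -> %s\n", link, cert)
}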
	I1028 13:13:29.420742  134197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 13:13:29.428743  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 13:13:29.435065  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 13:13:29.440678  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 13:13:29.445930  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 13:13:29.451012  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 13:13:29.456345  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 13:13:29.461609  134197 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:13:29.461691  134197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 13:13:29.461725  134197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 13:13:29.496024  134197 cri.go:89] found id: ""
	I1028 13:13:29.496095  134197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 13:13:29.505387  134197 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 13:13:29.505404  134197 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 13:13:29.505449  134197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 13:13:29.514612  134197 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 13:13:29.515716  134197 kubeconfig.go:125] found "default-k8s-diff-port-783661" server: "https://192.168.61.58:8444"
	I1028 13:13:29.518400  134197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 13:13:29.527127  134197 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.58
	I1028 13:13:29.527152  134197 kubeadm.go:1160] stopping kube-system containers ...
	I1028 13:13:29.527165  134197 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 13:13:29.527207  134197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 13:13:29.562704  134197 cri.go:89] found id: ""
	I1028 13:13:29.562779  134197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 13:13:29.579423  134197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 13:13:29.588397  134197 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 13:13:29.588431  134197 kubeadm.go:157] found existing configuration files:
	
	I1028 13:13:29.588480  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 13:13:29.597602  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 13:13:29.597671  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 13:13:29.606595  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 13:13:29.614682  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 13:13:29.614734  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 13:13:29.622987  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 13:13:29.630860  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 13:13:29.630910  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 13:13:29.639251  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 13:13:29.647268  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 13:13:29.647317  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 13:13:29.655608  134197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 13:13:29.664127  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:29.763979  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:31.190931  134197 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.426908776s)
	I1028 13:13:31.190975  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:31.380916  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:31.444452  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:31.511848  134197 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:13:31.511952  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:32.013005  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:32.512883  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:33.012777  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:33.048586  134197 api_server.go:72] duration metric: took 1.536736279s to wait for apiserver process to appear ...
	I1028 13:13:33.048616  134197 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:13:33.048643  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:33.049178  134197 api_server.go:269] stopped: https://192.168.61.58:8444/healthz: Get "https://192.168.61.58:8444/healthz": dial tcp 192.168.61.58:8444: connect: connection refused
	I1028 13:13:33.548706  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:36.090092  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 13:13:36.090127  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 13:13:36.090145  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:36.149045  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 13:13:36.149077  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 13:13:36.549621  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:36.555510  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 13:13:36.555539  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 13:13:37.049002  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:37.057764  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 13:13:37.057791  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 13:13:37.549545  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:37.554197  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 200:
	ok
	I1028 13:13:37.564130  134197 api_server.go:141] control plane version: v1.31.2
	I1028 13:13:37.564158  134197 api_server.go:131] duration metric: took 4.515535111s to wait for apiserver health ...
	I1028 13:13:37.564168  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:13:37.564174  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:13:37.566201  134197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 13:13:37.567535  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 13:13:37.577171  134197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 13:13:37.594324  134197 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:13:37.605014  134197 system_pods.go:59] 8 kube-system pods found
	I1028 13:13:37.605066  134197 system_pods.go:61] "coredns-7c65d6cfc9-x8gvd" [4498824f-7ce1-4167-8701-74cadd3fa83c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 13:13:37.605076  134197 system_pods.go:61] "etcd-default-k8s-diff-port-783661" [9a8a5a39-b0bb-4144-9e70-98fed2bbc838] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 13:13:37.605083  134197 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-783661" [e221604a-5b54-4755-952d-0c699167f402] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 13:13:37.605089  134197 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-783661" [95e9472e-3c24-4fd8-b79c-949d8cd980da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 13:13:37.605101  134197 system_pods.go:61] "kube-proxy-ff797" [ed2dce0b-4dc9-406e-a9c3-f91d75fa0876] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 13:13:37.605106  134197 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-783661" [7cab2cef-dacb-4943-9564-a1a625afa198] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 13:13:37.605113  134197 system_pods.go:61] "metrics-server-6867b74b74-rkx62" [31c37fb4-0650-481d-b1e3-4956769450d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 13:13:37.605118  134197 system_pods.go:61] "storage-provisioner" [21a53238-251d-4581-b4c3-3a788545ab0c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 13:13:37.605127  134197 system_pods.go:74] duration metric: took 10.78446ms to wait for pod list to return data ...
	I1028 13:13:37.605135  134197 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:13:37.610793  134197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:13:37.610817  134197 node_conditions.go:123] node cpu capacity is 2
	I1028 13:13:37.610830  134197 node_conditions.go:105] duration metric: took 5.689372ms to run NodePressure ...
	I1028 13:13:37.610855  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:37.889577  134197 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 13:13:37.893705  134197 kubeadm.go:739] kubelet initialised
	I1028 13:13:37.893729  134197 kubeadm.go:740] duration metric: took 4.119893ms waiting for restarted kubelet to initialise ...
	I1028 13:13:37.893753  134197 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:13:37.899304  134197 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.903662  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.903687  134197 pod_ready.go:82] duration metric: took 4.360023ms for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.903698  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.903710  134197 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.907223  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.907239  134197 pod_ready.go:82] duration metric: took 3.518315ms for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.907251  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.907257  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.911026  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.911043  134197 pod_ready.go:82] duration metric: took 3.780236ms for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.911051  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.911057  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.997939  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.997962  134197 pod_ready.go:82] duration metric: took 86.896486ms for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.997972  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.997979  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:38.397652  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-proxy-ff797" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.397683  134197 pod_ready.go:82] duration metric: took 399.693086ms for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:38.397694  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-proxy-ff797" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.397701  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:38.797922  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.797955  134197 pod_ready.go:82] duration metric: took 400.242965ms for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:38.797985  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.797997  134197 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:39.197558  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:39.197592  134197 pod_ready.go:82] duration metric: took 399.575732ms for pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:39.197604  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:39.197612  134197 pod_ready.go:39] duration metric: took 1.303837299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:13:39.197634  134197 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 13:13:39.210450  134197 ops.go:34] apiserver oom_adj: -16
	I1028 13:13:39.210472  134197 kubeadm.go:597] duration metric: took 9.705061723s to restartPrimaryControlPlane
	I1028 13:13:39.210482  134197 kubeadm.go:394] duration metric: took 9.74887869s to StartCluster
	I1028 13:13:39.210501  134197 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:13:39.210585  134197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:13:39.212960  134197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:13:39.213234  134197 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:13:39.213297  134197 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 13:13:39.213409  134197 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-783661"
	I1028 13:13:39.213413  134197 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-783661"
	I1028 13:13:39.213441  134197 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-783661"
	W1028 13:13:39.213454  134197 addons.go:243] addon storage-provisioner should already be in state true
	I1028 13:13:39.213453  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:13:39.213461  134197 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-783661"
	I1028 13:13:39.213485  134197 host.go:66] Checking if "default-k8s-diff-port-783661" exists ...
	I1028 13:13:39.213475  134197 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-783661"
	I1028 13:13:39.213526  134197 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-783661"
	W1028 13:13:39.213543  134197 addons.go:243] addon metrics-server should already be in state true
	I1028 13:13:39.213616  134197 host.go:66] Checking if "default-k8s-diff-port-783661" exists ...
	I1028 13:13:39.213951  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.213989  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.214006  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.214039  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.213996  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.214110  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.215292  134197 out.go:177] * Verifying Kubernetes components...
	I1028 13:13:39.216619  134197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:13:39.229952  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I1028 13:13:39.230093  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I1028 13:13:39.230210  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I1028 13:13:39.230480  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.230884  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.231128  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.231197  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.231222  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.231663  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.231736  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.231756  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.232343  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.232410  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.232469  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.233021  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.233049  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.234199  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.234229  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.234607  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.234787  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.238467  134197 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-783661"
	W1028 13:13:39.238500  134197 addons.go:243] addon default-storageclass should already be in state true
	I1028 13:13:39.238532  134197 host.go:66] Checking if "default-k8s-diff-port-783661" exists ...
	I1028 13:13:39.238939  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.238985  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.248564  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1028 13:13:39.249000  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.249552  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.249568  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1028 13:13:39.249576  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.249955  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.250011  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.250348  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.250466  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.250482  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.250839  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.251157  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.252090  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:39.252962  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:39.254247  134197 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 13:13:39.255072  134197 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 13:13:39.256106  134197 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:13:39.256129  134197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 13:13:39.256150  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:39.256715  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 13:13:39.256730  134197 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 13:13:39.256746  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:39.259364  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33205
	I1028 13:13:39.260132  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.260238  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.260596  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:39.260617  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.260758  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.260778  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.260842  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.260892  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:39.261059  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:39.261210  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:39.261234  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:39.261247  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.261344  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:39.261496  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:39.261657  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:39.261763  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:39.261871  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:39.261879  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.262448  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.262479  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.308139  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I1028 13:13:39.308709  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.309316  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.309344  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.309738  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.309932  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.311478  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:39.311716  134197 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 13:13:39.311733  134197 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 13:13:39.311751  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:39.314701  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.315147  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:39.315181  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.315333  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:39.315519  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:39.315697  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:39.315849  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:39.393200  134197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:13:39.408534  134197 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-783661" to be "Ready" ...
	I1028 13:13:39.501187  134197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 13:13:39.531748  134197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:13:39.544393  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 13:13:39.544418  134197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 13:13:39.594981  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 13:13:39.595012  134197 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 13:13:39.618922  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 13:13:39.618951  134197 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 13:13:39.638636  134197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 13:13:39.962178  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:39.962205  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:39.962485  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:39.962504  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:39.962519  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:39.962537  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:39.962548  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:39.962750  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:39.962766  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:39.962792  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:39.972199  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:39.972221  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:39.972480  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:39.972491  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:39.972502  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.655075  134197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123283251s)
	I1028 13:13:40.655142  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.655155  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.655454  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.655502  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.655511  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.655525  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.655553  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.655901  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.655913  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.655927  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.747119  134197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.108438664s)
	I1028 13:13:40.747181  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.747196  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.747501  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.747517  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.747530  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.747539  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.747547  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.747800  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.747821  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.747844  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.747865  134197 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-783661"
	I1028 13:13:40.749733  134197 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 13:13:40.750923  134197 addons.go:510] duration metric: took 1.53763073s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 13:13:41.413083  134197 node_ready.go:53] node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:43.912827  134197 node_ready.go:53] node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:46.412268  134197 node_ready.go:53] node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:46.913460  134197 node_ready.go:49] node "default-k8s-diff-port-783661" has status "Ready":"True"
	I1028 13:13:46.913489  134197 node_ready.go:38] duration metric: took 7.504910707s for node "default-k8s-diff-port-783661" to be "Ready" ...
	I1028 13:13:46.913499  134197 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:13:46.918312  134197 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.926982  134197 pod_ready.go:93] pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:46.927003  134197 pod_ready.go:82] duration metric: took 8.667996ms for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.927014  134197 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.931410  134197 pod_ready.go:93] pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:46.931429  134197 pod_ready.go:82] duration metric: took 4.406844ms for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.931437  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.939500  134197 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:46.939520  134197 pod_ready.go:82] duration metric: took 8.077556ms for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.939529  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:47.945396  134197 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:47.945424  134197 pod_ready.go:82] duration metric: took 1.005888192s for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:47.945434  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.113116  134197 pod_ready.go:93] pod "kube-proxy-ff797" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:48.113139  134197 pod_ready.go:82] duration metric: took 167.697182ms for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.113152  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.513307  134197 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:48.513333  134197 pod_ready.go:82] duration metric: took 400.171263ms for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.513347  134197 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:50.519958  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:13:53.019212  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:13:55.519405  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:13:58.020739  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:00.520634  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:03.020065  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:05.520194  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.837327913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121249837301649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6580e831-51e2-4d4f-8a2d-b2b0c8d45d56 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.837869165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8105ab65-38ea-419e-b544-dc00e41666f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.837943197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8105ab65-38ea-419e-b544-dc00e41666f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.837978481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8105ab65-38ea-419e-b544-dc00e41666f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.867393299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=445dbee5-827f-49e6-a93b-bfbd73079660 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.867469875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=445dbee5-827f-49e6-a93b-bfbd73079660 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.868424330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ec6f387-4981-4698-8810-667b5f430ec4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.868814693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121249868759307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ec6f387-4981-4698-8810-667b5f430ec4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.869272703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb513899-4fd4-47f9-b926-2cdbd1b24488 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.869316272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb513899-4fd4-47f9-b926-2cdbd1b24488 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.869358689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eb513899-4fd4-47f9-b926-2cdbd1b24488 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.897036195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32249ee5-2607-4986-baba-fb9151ec07df name=/runtime.v1.RuntimeService/Version
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.897093698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32249ee5-2607-4986-baba-fb9151ec07df name=/runtime.v1.RuntimeService/Version
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.898757371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74a067bd-89f4-4da4-b1ab-06f6a9e3df56 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.899247037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121249899223169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74a067bd-89f4-4da4-b1ab-06f6a9e3df56 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.899878426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c63e2481-2d13-4713-b686-65a12d61ea67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.899953505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c63e2481-2d13-4713-b686-65a12d61ea67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.900008778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c63e2481-2d13-4713-b686-65a12d61ea67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.928881663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9728057-546d-4cbf-a864-1fd4e48fac2f name=/runtime.v1.RuntimeService/Version
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.928948546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9728057-546d-4cbf-a864-1fd4e48fac2f name=/runtime.v1.RuntimeService/Version
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.930367619Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=387ca35f-9305-4d69-82d4-922470cbed14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.930865573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121249930844236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=387ca35f-9305-4d69-82d4-922470cbed14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.931488307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e7f6299-5a50-4a9d-b3a2-833e0e826533 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.931539070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e7f6299-5a50-4a9d-b3a2-833e0e826533 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:14:09 old-k8s-version-733464 crio[631]: time="2024-10-28 13:14:09.931568476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9e7f6299-5a50-4a9d-b3a2-833e0e826533 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053749] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037595] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.829427] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.915680] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.519083] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 12:57] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.070642] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061498] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.188572] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.147125] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.277465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.361094] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.069839] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.013009] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.125884] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 13:01] systemd-fstab-generator[5140]: Ignoring "noauto" option for root device
	[Oct28 13:03] systemd-fstab-generator[5420]: Ignoring "noauto" option for root device
	[  +0.055820] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:14:10 up 17 min,  0 users,  load average: 0.00, 0.01, 0.02
	Linux old-k8s-version-733464 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]: net.(*sysDialer).dialSerial(0xc000eb6000, 0x4f7fe40, 0xc000c6d8c0, 0xc00091c710, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /usr/local/go/src/net/dial.go:548 +0x152
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]: net.(*Dialer).DialContext(0xc000c25ce0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000e942d0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c41520, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000e942d0, 0x24, 0x60, 0x7fad0c6e66d8, 0x118, ...)
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]: net/http.(*Transport).dial(0xc000891040, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000e942d0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]: net/http.(*Transport).dialConn(0xc000891040, 0x4f7fe00, 0xc000120018, 0x0, 0xc0002f8c00, 0x5, 0xc000e942d0, 0x24, 0x0, 0xc000532ea0, ...)
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]: net/http.(*Transport).dialConnFor(0xc000891040, 0xc000ba6160)
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]: created by net/http.(*Transport).queueForDial
	Oct 28 13:14:07 old-k8s-version-733464 kubelet[6601]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 28 13:14:07 old-k8s-version-733464 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 13:14:07 old-k8s-version-733464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 13:14:08 old-k8s-version-733464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 28 13:14:08 old-k8s-version-733464 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 13:14:08 old-k8s-version-733464 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 13:14:08 old-k8s-version-733464 kubelet[6610]: I1028 13:14:08.407999    6610 server.go:416] Version: v1.20.0
	Oct 28 13:14:08 old-k8s-version-733464 kubelet[6610]: I1028 13:14:08.408326    6610 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 13:14:08 old-k8s-version-733464 kubelet[6610]: I1028 13:14:08.410258    6610 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 13:14:08 old-k8s-version-733464 kubelet[6610]: I1028 13:14:08.411228    6610 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 28 13:14:08 old-k8s-version-733464 kubelet[6610]: W1028 13:14:08.411257    6610 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (215.165597ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-733464" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-783661 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-783661 --alsologtostderr -v=3: exit status 82 (2m0.507648721s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-783661"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 13:05:51.357661  133560 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:05:51.357912  133560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:05:51.357922  133560 out.go:358] Setting ErrFile to fd 2...
	I1028 13:05:51.357927  133560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:05:51.358130  133560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:05:51.358347  133560 out.go:352] Setting JSON to false
	I1028 13:05:51.358431  133560 mustload.go:65] Loading cluster: default-k8s-diff-port-783661
	I1028 13:05:51.358779  133560 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:05:51.358855  133560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:05:51.359016  133560 mustload.go:65] Loading cluster: default-k8s-diff-port-783661
	I1028 13:05:51.359111  133560 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:05:51.359135  133560 stop.go:39] StopHost: default-k8s-diff-port-783661
	I1028 13:05:51.359466  133560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:05:51.359518  133560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:05:51.374902  133560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I1028 13:05:51.375501  133560 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:05:51.376153  133560 main.go:141] libmachine: Using API Version  1
	I1028 13:05:51.376176  133560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:05:51.376581  133560 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:05:51.378694  133560 out.go:177] * Stopping node "default-k8s-diff-port-783661"  ...
	I1028 13:05:51.379929  133560 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 13:05:51.379975  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:05:51.380222  133560 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 13:05:51.380268  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:05:51.383067  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:05:51.383504  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:05:00 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:05:51.383533  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:05:51.383712  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:05:51.383874  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:05:51.384048  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:05:51.384199  133560 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:05:51.471161  133560 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 13:05:51.542200  133560 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 13:05:51.610248  133560 main.go:141] libmachine: Stopping "default-k8s-diff-port-783661"...
	I1028 13:05:51.610294  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:05:51.611946  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Stop
	I1028 13:05:51.615207  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 0/120
	I1028 13:05:52.616611  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 1/120
	I1028 13:05:53.618038  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 2/120
	I1028 13:05:54.619525  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 3/120
	I1028 13:05:55.620914  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 4/120
	I1028 13:05:56.623054  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 5/120
	I1028 13:05:57.624473  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 6/120
	I1028 13:05:58.626764  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 7/120
	I1028 13:05:59.629057  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 8/120
	I1028 13:06:00.630599  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 9/120
	I1028 13:06:01.633176  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 10/120
	I1028 13:06:02.634392  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 11/120
	I1028 13:06:03.635848  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 12/120
	I1028 13:06:04.637537  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 13/120
	I1028 13:06:05.639468  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 14/120
	I1028 13:06:06.641457  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 15/120
	I1028 13:06:07.642744  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 16/120
	I1028 13:06:08.644564  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 17/120
	I1028 13:06:09.645868  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 18/120
	I1028 13:06:10.647246  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 19/120
	I1028 13:06:11.649364  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 20/120
	I1028 13:06:12.650754  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 21/120
	I1028 13:06:13.652188  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 22/120
	I1028 13:06:14.654454  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 23/120
	I1028 13:06:15.655713  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 24/120
	I1028 13:06:16.657636  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 25/120
	I1028 13:06:17.659023  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 26/120
	I1028 13:06:18.660320  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 27/120
	I1028 13:06:19.662214  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 28/120
	I1028 13:06:20.663716  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 29/120
	I1028 13:06:21.665888  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 30/120
	I1028 13:06:22.667432  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 31/120
	I1028 13:06:23.668804  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 32/120
	I1028 13:06:24.670210  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 33/120
	I1028 13:06:25.671651  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 34/120
	I1028 13:06:26.673081  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 35/120
	I1028 13:06:27.674459  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 36/120
	I1028 13:06:28.675671  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 37/120
	I1028 13:06:29.677105  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 38/120
	I1028 13:06:30.678417  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 39/120
	I1028 13:06:31.680537  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 40/120
	I1028 13:06:32.682012  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 41/120
	I1028 13:06:33.683340  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 42/120
	I1028 13:06:34.685157  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 43/120
	I1028 13:06:35.686695  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 44/120
	I1028 13:06:36.688689  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 45/120
	I1028 13:06:37.689894  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 46/120
	I1028 13:06:38.691389  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 47/120
	I1028 13:06:39.692679  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 48/120
	I1028 13:06:40.694266  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 49/120
	I1028 13:06:41.696459  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 50/120
	I1028 13:06:42.697997  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 51/120
	I1028 13:06:43.699375  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 52/120
	I1028 13:06:44.701331  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 53/120
	I1028 13:06:45.702752  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 54/120
	I1028 13:06:46.704856  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 55/120
	I1028 13:06:47.706175  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 56/120
	I1028 13:06:48.707656  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 57/120
	I1028 13:06:49.708960  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 58/120
	I1028 13:06:50.711375  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 59/120
	I1028 13:06:51.713552  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 60/120
	I1028 13:06:52.714973  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 61/120
	I1028 13:06:53.716297  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 62/120
	I1028 13:06:54.717983  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 63/120
	I1028 13:06:55.719546  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 64/120
	I1028 13:06:56.721667  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 65/120
	I1028 13:06:57.723685  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 66/120
	I1028 13:06:58.725233  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 67/120
	I1028 13:06:59.726469  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 68/120
	I1028 13:07:00.727963  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 69/120
	I1028 13:07:01.730153  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 70/120
	I1028 13:07:02.731446  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 71/120
	I1028 13:07:03.733072  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 72/120
	I1028 13:07:04.734212  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 73/120
	I1028 13:07:05.735815  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 74/120
	I1028 13:07:06.737777  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 75/120
	I1028 13:07:07.738928  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 76/120
	I1028 13:07:08.740261  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 77/120
	I1028 13:07:09.741931  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 78/120
	I1028 13:07:10.743515  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 79/120
	I1028 13:07:11.745919  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 80/120
	I1028 13:07:12.747309  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 81/120
	I1028 13:07:13.748797  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 82/120
	I1028 13:07:14.750929  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 83/120
	I1028 13:07:15.752411  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 84/120
	I1028 13:07:16.754186  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 85/120
	I1028 13:07:17.755615  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 86/120
	I1028 13:07:18.756893  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 87/120
	I1028 13:07:19.758316  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 88/120
	I1028 13:07:20.759692  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 89/120
	I1028 13:07:21.761846  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 90/120
	I1028 13:07:22.763404  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 91/120
	I1028 13:07:23.765006  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 92/120
	I1028 13:07:24.766385  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 93/120
	I1028 13:07:25.767957  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 94/120
	I1028 13:07:26.770129  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 95/120
	I1028 13:07:27.771567  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 96/120
	I1028 13:07:28.773111  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 97/120
	I1028 13:07:29.774561  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 98/120
	I1028 13:07:30.776653  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 99/120
	I1028 13:07:31.778891  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 100/120
	I1028 13:07:32.780330  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 101/120
	I1028 13:07:33.781986  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 102/120
	I1028 13:07:34.783561  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 103/120
	I1028 13:07:35.784925  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 104/120
	I1028 13:07:36.787129  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 105/120
	I1028 13:07:37.788346  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 106/120
	I1028 13:07:38.789601  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 107/120
	I1028 13:07:39.790769  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 108/120
	I1028 13:07:40.792116  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 109/120
	I1028 13:07:41.794059  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 110/120
	I1028 13:07:42.795476  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 111/120
	I1028 13:07:43.796804  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 112/120
	I1028 13:07:44.799022  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 113/120
	I1028 13:07:45.800276  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 114/120
	I1028 13:07:46.802243  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 115/120
	I1028 13:07:47.803576  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 116/120
	I1028 13:07:48.804923  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 117/120
	I1028 13:07:49.806278  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 118/120
	I1028 13:07:50.807580  133560 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for machine to stop 119/120
	I1028 13:07:51.808733  133560 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 13:07:51.808826  133560 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 13:07:51.810644  133560 out.go:201] 
	W1028 13:07:51.812108  133560 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 13:07:51.812130  133560 out.go:270] * 
	* 
	W1028 13:07:51.815282  133560 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 13:07:51.816639  133560 out.go:201] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-783661 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661: exit status 3 (18.486116281s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 13:08:10.303988  133993 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	E1028 13:08:10.304012  133993 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-783661" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)
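
Note: the stop flow above backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, asks the kvm2 driver to stop the domain, then polls roughly once per second for 120 attempts before giving up with GUEST_STOP_TIMEOUT (exit status 82). A hedged manual follow-up, assuming libvirt access on the build host; the virsh commands are illustrative and are not part of the test harness:
	# Check whether the domain is still running after the 120s graceful-stop window
	virsh --connect qemu:///system list --all | grep default-k8s-diff-port-783661
	# Force the guest off (hard power-off; the guest gets no chance to sync its filesystem)
	virsh --connect qemu:///system destroy default-k8s-diff-port-783661
	# Re-check the profile state that the follow-up tests query
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661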

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661: exit status 3 (3.17204562s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 13:08:13.475984  134072 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	E1028 13:08:13.476008  134072 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-783661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-783661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149118498s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-783661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661: exit status 3 (3.068047371s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 13:08:22.692022  134151 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	E1028 13:08:22.692054  134151 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-783661" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)
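
Note: the addon enable fails with MK_ADDON_ENABLE_PAUSED because minikube first lists paused containers via crictl over SSH, and 192.168.61.58:22 is unreachable after the failed stop. A small pre-check sketch, assuming the same profile name; this guard is illustrative and is not part of start_stop_delete_test.go:
	# Only attempt the addon change once the host reports a usable state
	if out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 | grep -qE 'Running|Stopped'; then
	  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-783661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	fi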

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (430.47s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-702694 -n no-preload-702694
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 13:17:43.246355 +0000 UTC m=+6044.811926441
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-702694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-702694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.552µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-702694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-702694 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-702694 logs -n 25: (1.115111456s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC | 28 Oct 24 13:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-702694                  | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-213407 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	|         | disable-driver-mounts-213407                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:05 UTC |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-783661  | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC | 28 Oct 24 13:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783661       | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 13:15 UTC | 28 Oct 24 13:15 UTC |
	| start   | -p newest-cni-051506 --memory=2200 --alsologtostderr   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:15 UTC | 28 Oct 24 13:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-051506             | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-051506                  | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-051506 --memory=2200 --alsologtostderr   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-051506 image list                           | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	| delete  | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	| start   | -p auto-297280 --memory=3072                           | auto-297280                  | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:17:37
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:17:37.436165  137683 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:17:37.436475  137683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:17:37.436487  137683 out.go:358] Setting ErrFile to fd 2...
	I1028 13:17:37.436494  137683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:17:37.436716  137683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:17:37.437255  137683 out.go:352] Setting JSON to false
	I1028 13:17:37.438221  137683 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10807,"bootTime":1730110650,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:17:37.438315  137683 start.go:139] virtualization: kvm guest
	I1028 13:17:37.440455  137683 out.go:177] * [auto-297280] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:17:37.441695  137683 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:17:37.441703  137683 notify.go:220] Checking for updates...
	I1028 13:17:37.443912  137683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:17:37.445093  137683 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:17:37.446215  137683 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:17:37.447256  137683 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:17:37.448379  137683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:17:37.450071  137683 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:17:37.450166  137683 config.go:182] Loaded profile config "embed-certs-818470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:17:37.450253  137683 config.go:182] Loaded profile config "no-preload-702694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:17:37.450318  137683 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:17:37.485985  137683 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 13:17:37.487120  137683 start.go:297] selected driver: kvm2
	I1028 13:17:37.487135  137683 start.go:901] validating driver "kvm2" against <nil>
	I1028 13:17:37.487149  137683 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:17:37.487907  137683 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:17:37.487992  137683 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:17:37.503928  137683 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:17:37.503982  137683 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 13:17:37.504277  137683 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:17:37.504315  137683 cni.go:84] Creating CNI manager for ""
	I1028 13:17:37.504384  137683 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:17:37.504393  137683 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 13:17:37.504451  137683 start.go:340] cluster config:
	{Name:auto-297280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:17:37.504538  137683 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:17:37.506173  137683 out.go:177] * Starting "auto-297280" primary control-plane node in "auto-297280" cluster
	I1028 13:17:37.507328  137683 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:17:37.507373  137683 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:17:37.507383  137683 cache.go:56] Caching tarball of preloaded images
	I1028 13:17:37.507459  137683 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:17:37.507469  137683 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:17:37.507580  137683 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/config.json ...
	I1028 13:17:37.507601  137683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/config.json: {Name:mk77d6c71ea2fa4cbde247604455e751f0622267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:17:37.507749  137683 start.go:360] acquireMachinesLock for auto-297280: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:17:37.507780  137683 start.go:364] duration metric: took 17.378µs to acquireMachinesLock for "auto-297280"
	I1028 13:17:37.507796  137683 start.go:93] Provisioning new machine with config: &{Name:auto-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:17:37.507866  137683 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 13:17:34.520042  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:17:37.021055  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:17:37.509424  137683 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 13:17:37.509578  137683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:17:37.509623  137683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:17:37.525298  137683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
	I1028 13:17:37.525718  137683 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:17:37.526296  137683 main.go:141] libmachine: Using API Version  1
	I1028 13:17:37.526321  137683 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:17:37.526637  137683 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:17:37.526817  137683 main.go:141] libmachine: (auto-297280) Calling .GetMachineName
	I1028 13:17:37.526953  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:37.527128  137683 start.go:159] libmachine.API.Create for "auto-297280" (driver="kvm2")
	I1028 13:17:37.527157  137683 client.go:168] LocalClient.Create starting
	I1028 13:17:37.527186  137683 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 13:17:37.527229  137683 main.go:141] libmachine: Decoding PEM data...
	I1028 13:17:37.527252  137683 main.go:141] libmachine: Parsing certificate...
	I1028 13:17:37.527333  137683 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 13:17:37.527364  137683 main.go:141] libmachine: Decoding PEM data...
	I1028 13:17:37.527385  137683 main.go:141] libmachine: Parsing certificate...
	I1028 13:17:37.527424  137683 main.go:141] libmachine: Running pre-create checks...
	I1028 13:17:37.527439  137683 main.go:141] libmachine: (auto-297280) Calling .PreCreateCheck
	I1028 13:17:37.527836  137683 main.go:141] libmachine: (auto-297280) Calling .GetConfigRaw
	I1028 13:17:37.528223  137683 main.go:141] libmachine: Creating machine...
	I1028 13:17:37.528237  137683 main.go:141] libmachine: (auto-297280) Calling .Create
	I1028 13:17:37.528370  137683 main.go:141] libmachine: (auto-297280) Creating KVM machine...
	I1028 13:17:37.529537  137683 main.go:141] libmachine: (auto-297280) DBG | found existing default KVM network
	I1028 13:17:37.531097  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:37.530974  137705 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211810}
	I1028 13:17:37.531127  137683 main.go:141] libmachine: (auto-297280) DBG | created network xml: 
	I1028 13:17:37.531140  137683 main.go:141] libmachine: (auto-297280) DBG | <network>
	I1028 13:17:37.531151  137683 main.go:141] libmachine: (auto-297280) DBG |   <name>mk-auto-297280</name>
	I1028 13:17:37.531158  137683 main.go:141] libmachine: (auto-297280) DBG |   <dns enable='no'/>
	I1028 13:17:37.531169  137683 main.go:141] libmachine: (auto-297280) DBG |   
	I1028 13:17:37.531182  137683 main.go:141] libmachine: (auto-297280) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 13:17:37.531194  137683 main.go:141] libmachine: (auto-297280) DBG |     <dhcp>
	I1028 13:17:37.531205  137683 main.go:141] libmachine: (auto-297280) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 13:17:37.531280  137683 main.go:141] libmachine: (auto-297280) DBG |     </dhcp>
	I1028 13:17:37.531314  137683 main.go:141] libmachine: (auto-297280) DBG |   </ip>
	I1028 13:17:37.531331  137683 main.go:141] libmachine: (auto-297280) DBG |   
	I1028 13:17:37.531343  137683 main.go:141] libmachine: (auto-297280) DBG | </network>
	I1028 13:17:37.531357  137683 main.go:141] libmachine: (auto-297280) DBG | 
	I1028 13:17:37.535930  137683 main.go:141] libmachine: (auto-297280) DBG | trying to create private KVM network mk-auto-297280 192.168.39.0/24...
	I1028 13:17:37.608974  137683 main.go:141] libmachine: (auto-297280) DBG | private KVM network mk-auto-297280 192.168.39.0/24 created
	I1028 13:17:37.609005  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:37.608941  137705 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:17:37.609015  137683 main.go:141] libmachine: (auto-297280) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280 ...
	I1028 13:17:37.609047  137683 main.go:141] libmachine: (auto-297280) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 13:17:37.609115  137683 main.go:141] libmachine: (auto-297280) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 13:17:37.867996  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:37.867856  137705 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/id_rsa...
	I1028 13:17:38.027287  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:38.027184  137705 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/auto-297280.rawdisk...
	I1028 13:17:38.027318  137683 main.go:141] libmachine: (auto-297280) DBG | Writing magic tar header
	I1028 13:17:38.027331  137683 main.go:141] libmachine: (auto-297280) DBG | Writing SSH key tar header
	I1028 13:17:38.027340  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:38.027312  137705 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280 ...
	I1028 13:17:38.027469  137683 main.go:141] libmachine: (auto-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280
	I1028 13:17:38.027502  137683 main.go:141] libmachine: (auto-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 13:17:38.027514  137683 main.go:141] libmachine: (auto-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280 (perms=drwx------)
	I1028 13:17:38.027526  137683 main.go:141] libmachine: (auto-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 13:17:38.027536  137683 main.go:141] libmachine: (auto-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 13:17:38.027552  137683 main.go:141] libmachine: (auto-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 13:17:38.027563  137683 main.go:141] libmachine: (auto-297280) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 13:17:38.027576  137683 main.go:141] libmachine: (auto-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:17:38.027590  137683 main.go:141] libmachine: (auto-297280) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 13:17:38.027599  137683 main.go:141] libmachine: (auto-297280) Creating domain...
	I1028 13:17:38.027607  137683 main.go:141] libmachine: (auto-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 13:17:38.027621  137683 main.go:141] libmachine: (auto-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 13:17:38.027658  137683 main.go:141] libmachine: (auto-297280) DBG | Checking permissions on dir: /home/jenkins
	I1028 13:17:38.027671  137683 main.go:141] libmachine: (auto-297280) DBG | Checking permissions on dir: /home
	I1028 13:17:38.027678  137683 main.go:141] libmachine: (auto-297280) DBG | Skipping /home - not owner
	I1028 13:17:38.028736  137683 main.go:141] libmachine: (auto-297280) define libvirt domain using xml: 
	I1028 13:17:38.028758  137683 main.go:141] libmachine: (auto-297280) <domain type='kvm'>
	I1028 13:17:38.028770  137683 main.go:141] libmachine: (auto-297280)   <name>auto-297280</name>
	I1028 13:17:38.028780  137683 main.go:141] libmachine: (auto-297280)   <memory unit='MiB'>3072</memory>
	I1028 13:17:38.028790  137683 main.go:141] libmachine: (auto-297280)   <vcpu>2</vcpu>
	I1028 13:17:38.028805  137683 main.go:141] libmachine: (auto-297280)   <features>
	I1028 13:17:38.028817  137683 main.go:141] libmachine: (auto-297280)     <acpi/>
	I1028 13:17:38.028828  137683 main.go:141] libmachine: (auto-297280)     <apic/>
	I1028 13:17:38.028838  137683 main.go:141] libmachine: (auto-297280)     <pae/>
	I1028 13:17:38.028856  137683 main.go:141] libmachine: (auto-297280)     
	I1028 13:17:38.028867  137683 main.go:141] libmachine: (auto-297280)   </features>
	I1028 13:17:38.028875  137683 main.go:141] libmachine: (auto-297280)   <cpu mode='host-passthrough'>
	I1028 13:17:38.028938  137683 main.go:141] libmachine: (auto-297280)   
	I1028 13:17:38.028962  137683 main.go:141] libmachine: (auto-297280)   </cpu>
	I1028 13:17:38.028990  137683 main.go:141] libmachine: (auto-297280)   <os>
	I1028 13:17:38.029012  137683 main.go:141] libmachine: (auto-297280)     <type>hvm</type>
	I1028 13:17:38.029029  137683 main.go:141] libmachine: (auto-297280)     <boot dev='cdrom'/>
	I1028 13:17:38.029046  137683 main.go:141] libmachine: (auto-297280)     <boot dev='hd'/>
	I1028 13:17:38.029058  137683 main.go:141] libmachine: (auto-297280)     <bootmenu enable='no'/>
	I1028 13:17:38.029067  137683 main.go:141] libmachine: (auto-297280)   </os>
	I1028 13:17:38.029091  137683 main.go:141] libmachine: (auto-297280)   <devices>
	I1028 13:17:38.029106  137683 main.go:141] libmachine: (auto-297280)     <disk type='file' device='cdrom'>
	I1028 13:17:38.029122  137683 main.go:141] libmachine: (auto-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/boot2docker.iso'/>
	I1028 13:17:38.029132  137683 main.go:141] libmachine: (auto-297280)       <target dev='hdc' bus='scsi'/>
	I1028 13:17:38.029143  137683 main.go:141] libmachine: (auto-297280)       <readonly/>
	I1028 13:17:38.029152  137683 main.go:141] libmachine: (auto-297280)     </disk>
	I1028 13:17:38.029164  137683 main.go:141] libmachine: (auto-297280)     <disk type='file' device='disk'>
	I1028 13:17:38.029176  137683 main.go:141] libmachine: (auto-297280)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 13:17:38.029200  137683 main.go:141] libmachine: (auto-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/auto-297280.rawdisk'/>
	I1028 13:17:38.029215  137683 main.go:141] libmachine: (auto-297280)       <target dev='hda' bus='virtio'/>
	I1028 13:17:38.029223  137683 main.go:141] libmachine: (auto-297280)     </disk>
	I1028 13:17:38.029229  137683 main.go:141] libmachine: (auto-297280)     <interface type='network'>
	I1028 13:17:38.029235  137683 main.go:141] libmachine: (auto-297280)       <source network='mk-auto-297280'/>
	I1028 13:17:38.029240  137683 main.go:141] libmachine: (auto-297280)       <model type='virtio'/>
	I1028 13:17:38.029246  137683 main.go:141] libmachine: (auto-297280)     </interface>
	I1028 13:17:38.029251  137683 main.go:141] libmachine: (auto-297280)     <interface type='network'>
	I1028 13:17:38.029258  137683 main.go:141] libmachine: (auto-297280)       <source network='default'/>
	I1028 13:17:38.029263  137683 main.go:141] libmachine: (auto-297280)       <model type='virtio'/>
	I1028 13:17:38.029267  137683 main.go:141] libmachine: (auto-297280)     </interface>
	I1028 13:17:38.029272  137683 main.go:141] libmachine: (auto-297280)     <serial type='pty'>
	I1028 13:17:38.029281  137683 main.go:141] libmachine: (auto-297280)       <target port='0'/>
	I1028 13:17:38.029286  137683 main.go:141] libmachine: (auto-297280)     </serial>
	I1028 13:17:38.029294  137683 main.go:141] libmachine: (auto-297280)     <console type='pty'>
	I1028 13:17:38.029299  137683 main.go:141] libmachine: (auto-297280)       <target type='serial' port='0'/>
	I1028 13:17:38.029308  137683 main.go:141] libmachine: (auto-297280)     </console>
	I1028 13:17:38.029320  137683 main.go:141] libmachine: (auto-297280)     <rng model='virtio'>
	I1028 13:17:38.029330  137683 main.go:141] libmachine: (auto-297280)       <backend model='random'>/dev/random</backend>
	I1028 13:17:38.029335  137683 main.go:141] libmachine: (auto-297280)     </rng>
	I1028 13:17:38.029340  137683 main.go:141] libmachine: (auto-297280)     
	I1028 13:17:38.029356  137683 main.go:141] libmachine: (auto-297280)     
	I1028 13:17:38.029369  137683 main.go:141] libmachine: (auto-297280)   </devices>
	I1028 13:17:38.029381  137683 main.go:141] libmachine: (auto-297280) </domain>
	I1028 13:17:38.029388  137683 main.go:141] libmachine: (auto-297280) 
	I1028 13:17:38.033512  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:b3:16:22 in network default
	I1028 13:17:38.034091  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:38.034110  137683 main.go:141] libmachine: (auto-297280) Ensuring networks are active...
	I1028 13:17:38.034790  137683 main.go:141] libmachine: (auto-297280) Ensuring network default is active
	I1028 13:17:38.035085  137683 main.go:141] libmachine: (auto-297280) Ensuring network mk-auto-297280 is active
	I1028 13:17:38.035587  137683 main.go:141] libmachine: (auto-297280) Getting domain xml...
	I1028 13:17:38.036260  137683 main.go:141] libmachine: (auto-297280) Creating domain...
	I1028 13:17:39.265726  137683 main.go:141] libmachine: (auto-297280) Waiting to get IP...
	I1028 13:17:39.266743  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:39.267150  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:39.267227  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:39.267143  137705 retry.go:31] will retry after 194.160052ms: waiting for machine to come up
	I1028 13:17:39.462890  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:39.463521  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:39.463701  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:39.463599  137705 retry.go:31] will retry after 358.725016ms: waiting for machine to come up
	I1028 13:17:39.824410  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:39.825000  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:39.825031  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:39.824954  137705 retry.go:31] will retry after 441.097211ms: waiting for machine to come up
	I1028 13:17:40.267776  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:40.268374  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:40.268403  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:40.268306  137705 retry.go:31] will retry after 550.134777ms: waiting for machine to come up
	I1028 13:17:40.819807  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:40.820433  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:40.820457  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:40.820385  137705 retry.go:31] will retry after 576.328502ms: waiting for machine to come up
	I1028 13:17:41.398009  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:41.398541  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:41.398574  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:41.398479  137705 retry.go:31] will retry after 909.693779ms: waiting for machine to come up
	I1028 13:17:42.309236  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:42.309728  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:42.309769  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:42.309686  137705 retry.go:31] will retry after 1.126064773s: waiting for machine to come up
	I1028 13:17:39.021437  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:17:41.519909  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.846327651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121463846301873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f5c7b95-3181-4896-953c-eda3e6df265f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.846798796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73d354fa-8ce2-4d98-b18e-6de80cb8389f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.846847926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73d354fa-8ce2-4d98-b18e-6de80cb8389f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.847022182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73d354fa-8ce2-4d98-b18e-6de80cb8389f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.886819281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39486fe5-481f-4ded-9f59-f9db4f0cb784 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.886907830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39486fe5-481f-4ded-9f59-f9db4f0cb784 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.888127898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f958a8a8-a6c1-4727-847f-7caab5f2043c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.889708656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121463889670254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f958a8a8-a6c1-4727-847f-7caab5f2043c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.890340598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48f5317c-b550-41c1-802c-6d255431eb8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.890413146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48f5317c-b550-41c1-802c-6d255431eb8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.890701860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48f5317c-b550-41c1-802c-6d255431eb8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.925728120Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca8e0dee-880e-4772-bf75-2831202e11db name=/runtime.v1.RuntimeService/Version
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.925820724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca8e0dee-880e-4772-bf75-2831202e11db name=/runtime.v1.RuntimeService/Version
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.926968990Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=993656d3-3972-4f93-b91d-2a0a373a2f6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.927296545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121463927275001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=993656d3-3972-4f93-b91d-2a0a373a2f6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.927747418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a111271e-e882-45f5-a296-8e4ebd591c49 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.927794429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a111271e-e882-45f5-a296-8e4ebd591c49 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.928659507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a111271e-e882-45f5-a296-8e4ebd591c49 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.960419906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0148e47d-f410-414a-8969-faebc6557a61 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.960740762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0148e47d-f410-414a-8969-faebc6557a61 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.961728345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4622395-a51f-44e7-85e6-6a6e15394419 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.962151038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121463962130009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4622395-a51f-44e7-85e6-6a6e15394419 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.962706149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=912edcec-7d91-4d0d-b5ef-62c28a84a38c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.962769903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=912edcec-7d91-4d0d-b5ef-62c28a84a38c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:17:43 no-preload-702694 crio[705]: time="2024-10-28 13:17:43.962948717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120256794094440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc02ea7494c68fc8ea331488e25e6840b71abb3be805c1b49604c47e169923b0,PodSandboxId:28cddba48cd1f51074b4335e5bf2dd430052d2d06c3f5a752439242e3bfbf087,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730120235860406039,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f9a11ba-2e9c-4423-8d11-bb22717f8088,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c,PodSandboxId:3a7ae35ca1eb4fa593a399a5a667f2beaa942134f836446de11fe5fdc5f8cd97,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120233675274129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ztw6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8345274a-f93b-4b2f-b8db-8c1578d16f76,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b,PodSandboxId:2b12c00d44d600b39f84ce6a16ba47b197f1c5131d17faebad9f81f7e1728345,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730120226061591117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
58d1786-9ffd-47d6-9da4-ff5bb7740cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067,PodSandboxId:c844f99dd5f3602377cabd3fb90769e1eb88135dc415352e0a70eef30c0756ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120226017192308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ws2ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8e2076-9bfb-4d1c-9e75-88978f59f9
24,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5,PodSandboxId:d2477c2476eb0df453b498c28bf9ab765a0bd8421acb1efe804b89e3db62e145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120221251148544,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d1f0528f052efe0d795084ed5f2ece,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001,PodSandboxId:3937c011b5fd3ad1a4e8b0f5e9b02141cd3632c64a6895bc811b1db0f9773333,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120221221814247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fb50e81343d57d19f9c2247fd0c70ae,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a,PodSandboxId:30c481cce3d2a413e09f43f038d2ef79ee4c71283ecb068399e7792a1fa7fc02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120221270588006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50241634af590d1b9d375eb08aa29911,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7
d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2,PodSandboxId:652e01b20a3575e975617b61425b3fe567f1926f7114c61743e4e7875cc0c61d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120221207059789,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-702694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e7f8843f335f89a2de17b6723f3ca0f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=912edcec-7d91-4d0d-b5ef-62c28a84a38c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4540b20ac011       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   2b12c00d44d60       storage-provisioner
	fc02ea7494c68       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   28cddba48cd1f       busybox
	b1cc4ab88fe8d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   3a7ae35ca1eb4       coredns-7c65d6cfc9-ztw6s
	cf0d300afa265       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   2b12c00d44d60       storage-provisioner
	f23435879fbc7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      20 minutes ago      Running             kube-proxy                1                   c844f99dd5f36       kube-proxy-ws2ns
	b597acbba8b05       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   30c481cce3d2a       etcd-no-preload-702694
	031a54940b19d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      20 minutes ago      Running             kube-scheduler            1                   d2477c2476eb0       kube-scheduler-no-preload-702694
	913422942e8f4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      20 minutes ago      Running             kube-apiserver            1                   3937c011b5fd3       kube-apiserver-no-preload-702694
	b4ee30ee5800b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      20 minutes ago      Running             kube-controller-manager   1                   652e01b20a357       kube-controller-manager-no-preload-702694
	
	
	==> coredns [b1cc4ab88fe8d03797834a582ba06e57ade55e99c3ecc5f47915e76e1417954c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43065 - 18476 "HINFO IN 5657511228394046735.7229522385264883326. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.049111649s
	
	
	==> describe nodes <==
	Name:               no-preload-702694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-702694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=no-preload-702694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_48_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:48:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-702694
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 13:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 13:12:53 +0000   Mon, 28 Oct 2024 12:48:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 13:12:53 +0000   Mon, 28 Oct 2024 12:48:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 13:12:53 +0000   Mon, 28 Oct 2024 12:48:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 13:12:53 +0000   Mon, 28 Oct 2024 12:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.192
	  Hostname:    no-preload-702694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5dc07cb00a34b8a9d518d396a7c1405
	  System UUID:                f5dc07cb-00a3-4b8a-9d51-8d396a7c1405
	  Boot ID:                    004ac86a-5cea-4f2c-bfdb-1d8a65990f6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-ztw6s                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-no-preload-702694                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-702694             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-702694    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-ws2ns                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-no-preload-702694             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-wxm6t              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-702694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-702694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-702694 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-702694 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node no-preload-702694 event: Registered Node no-preload-702694 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-702694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-702694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-702694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-702694 event: Registered Node no-preload-702694 in Controller
	
	
	==> dmesg <==
	[Oct28 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050414] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036716] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.767675] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.926688] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.511740] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.143412] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.061050] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050546] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.198820] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.122006] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.255246] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[ +14.866848] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.059266] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.587341] systemd-fstab-generator[1419]: Ignoring "noauto" option for root device
	[Oct28 12:57] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.975654] systemd-fstab-generator[2063]: Ignoring "noauto" option for root device
	[  +3.718250] kauditd_printk_skb: 58 callbacks suppressed
	[ +25.181303] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [b597acbba8b05137ebf06a752db843c053e17e004da5baa065cc7517957b066a] <==
	{"level":"warn","ts":"2024-10-28T13:16:31.418629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.393571ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16153447399271172728 > lease_revoke:<id:602c92d333840a1b>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T13:16:31.418714Z","caller":"traceutil/trace.go:171","msg":"trace[1435027071] linearizableReadLoop","detail":"{readStateIndex:1837; appliedIndex:1836; }","duration":"312.859108ms","start":"2024-10-28T13:16:31.105838Z","end":"2024-10-28T13:16:31.418697Z","steps":["trace[1435027071] 'read index received'  (duration: 51.320413ms)","trace[1435027071] 'applied index is now lower than readState.Index'  (duration: 261.537698ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:16:31.418922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.067458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:16:31.418972Z","caller":"traceutil/trace.go:171","msg":"trace[1415676917] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1562; }","duration":"313.138377ms","start":"2024-10-28T13:16:31.105825Z","end":"2024-10-28T13:16:31.418963Z","steps":["trace[1415676917] 'agreement among raft nodes before linearized reading'  (duration: 313.053955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:16:31.419036Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T13:16:31.105794Z","time spent":"313.223248ms","remote":"127.0.0.1:45376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":28,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true "}
	{"level":"warn","ts":"2024-10-28T13:16:31.419259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.47122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:16:31.419305Z","caller":"traceutil/trace.go:171","msg":"trace[1486072054] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1562; }","duration":"287.519857ms","start":"2024-10-28T13:16:31.131774Z","end":"2024-10-28T13:16:31.419294Z","steps":["trace[1486072054] 'agreement among raft nodes before linearized reading'  (duration: 287.445159ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:16:31.876933Z","caller":"traceutil/trace.go:171","msg":"trace[2132568452] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"183.730659ms","start":"2024-10-28T13:16:31.693183Z","end":"2024-10-28T13:16:31.876914Z","steps":["trace[2132568452] 'process raft request'  (duration: 183.574829ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:17:02.923458Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1346}
	{"level":"info","ts":"2024-10-28T13:17:02.927323Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1346,"took":"3.346698ms","hash":909142335,"current-db-size-bytes":2768896,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-28T13:17:02.927398Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":909142335,"revision":1346,"compact-revision":1101}
	{"level":"info","ts":"2024-10-28T13:17:24.742459Z","caller":"traceutil/trace.go:171","msg":"trace[74401436] linearizableReadLoop","detail":"{readStateIndex:1893; appliedIndex:1892; }","duration":"381.568447ms","start":"2024-10-28T13:17:24.360867Z","end":"2024-10-28T13:17:24.742435Z","steps":["trace[74401436] 'read index received'  (duration: 381.381284ms)","trace[74401436] 'applied index is now lower than readState.Index'  (duration: 186.689µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T13:17:24.742587Z","caller":"traceutil/trace.go:171","msg":"trace[599335803] transaction","detail":"{read_only:false; response_revision:1606; number_of_response:1; }","duration":"618.090566ms","start":"2024-10-28T13:17:24.124483Z","end":"2024-10-28T13:17:24.742574Z","steps":["trace[599335803] 'process raft request'  (duration: 617.821925ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:17:24.742693Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T13:17:24.124468Z","time spent":"618.139753ms","remote":"127.0.0.1:45302","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1605 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-28T13:17:24.742797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"381.932454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T13:17:24.742859Z","caller":"traceutil/trace.go:171","msg":"trace[395876039] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1606; }","duration":"381.994712ms","start":"2024-10-28T13:17:24.360857Z","end":"2024-10-28T13:17:24.742851Z","steps":["trace[395876039] 'agreement among raft nodes before linearized reading'  (duration: 381.913749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:17:24.742881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T13:17:24.360813Z","time spent":"382.06145ms","remote":"127.0.0.1:45560","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":8,"response size":30,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true "}
	{"level":"warn","ts":"2024-10-28T13:17:24.743006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.896341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:17:24.743066Z","caller":"traceutil/trace.go:171","msg":"trace[1471191505] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1606; }","duration":"250.954583ms","start":"2024-10-28T13:17:24.492103Z","end":"2024-10-28T13:17:24.743058Z","steps":["trace[1471191505] 'agreement among raft nodes before linearized reading'  (duration: 250.886139ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:17:24.743199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.035429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T13:17:24.743231Z","caller":"traceutil/trace.go:171","msg":"trace[210245186] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1606; }","duration":"241.07114ms","start":"2024-10-28T13:17:24.502154Z","end":"2024-10-28T13:17:24.743225Z","steps":["trace[210245186] 'agreement among raft nodes before linearized reading'  (duration: 241.024004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:17:24.743185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.38841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:17:24.743299Z","caller":"traceutil/trace.go:171","msg":"trace[755050644] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1606; }","duration":"151.502495ms","start":"2024-10-28T13:17:24.591787Z","end":"2024-10-28T13:17:24.743289Z","steps":["trace[755050644] 'agreement among raft nodes before linearized reading'  (duration: 151.370807ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:17:25.192853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.972732ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:17:25.192909Z","caller":"traceutil/trace.go:171","msg":"trace[1452626675] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1606; }","duration":"304.037156ms","start":"2024-10-28T13:17:24.888861Z","end":"2024-10-28T13:17:25.192898Z","steps":["trace[1452626675] 'range keys from in-memory index tree'  (duration: 303.917405ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:17:44 up 21 min,  0 users,  load average: 0.03, 0.13, 0.11
	Linux no-preload-702694 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [913422942e8f4a8217939257002c2b505a36965d1338dd7cace649acc364a001] <==
	I1028 13:13:05.582863       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:13:05.582892       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:15:05.583012       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:15:05.583408       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:15:05.583474       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:15:05.583613       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 13:15:05.584812       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:15:05.584897       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:17:04.583341       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:17:04.583482       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:17:05.586176       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:17:05.586286       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:17:05.586454       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:17:05.586642       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 13:17:05.587407       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:17:05.588016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b4ee30ee5800b27b4a4389b4227cde47fb447d4de1d9cd6bb7ccfed1063598c2] <==
	E1028 13:12:38.222954       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:12:38.713177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:12:53.385902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-702694"
	E1028 13:13:08.229152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:13:08.721909       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:13:22.627963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="198.71µs"
	I1028 13:13:33.626924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="86.539µs"
	E1028 13:13:38.235224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:13:38.729172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:14:08.241891       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:14:08.736246       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:14:38.249046       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:14:38.743590       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:15:08.255902       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:15:08.750391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:15:38.261651       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:15:38.758616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:16:08.267584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:16:08.767476       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:16:38.274641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:16:38.775550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:17:08.281380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:17:08.783036       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:17:38.286827       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:17:38.790382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f23435879fbc78801d78b0b8c22e77132019bb4134d5fa64ef5b2e1f48914067] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:57:06.316363       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:57:06.334056       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.192"]
	E1028 12:57:06.358637       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:57:06.450313       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:57:06.450379       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:57:06.450428       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:57:06.453872       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:57:06.454264       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:57:06.454304       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:57:06.456048       1 config.go:199] "Starting service config controller"
	I1028 12:57:06.456076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:57:06.456094       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:57:06.456098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:57:06.456426       1 config.go:328] "Starting node config controller"
	I1028 12:57:06.456451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:57:06.557862       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:57:06.557921       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:57:06.557981       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [031a54940b19df7d0054c21ab018c4bf8469d590b6c87053d5dd54eb88a17bd5] <==
	I1028 12:57:02.327133       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:57:04.514071       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:57:04.514186       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:57:04.514202       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:57:04.514211       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:57:04.596789       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:57:04.596844       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:57:04.613008       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:57:04.615857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:57:04.615948       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:57:04.616223       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:57:04.717219       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 13:16:39 no-preload-702694 kubelet[1426]: E1028 13:16:39.612918    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:16:40 no-preload-702694 kubelet[1426]: E1028 13:16:40.841774    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121400841400166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:16:40 no-preload-702694 kubelet[1426]: E1028 13:16:40.842029    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121400841400166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:16:50 no-preload-702694 kubelet[1426]: E1028 13:16:50.844171    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121410843661217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:16:50 no-preload-702694 kubelet[1426]: E1028 13:16:50.844226    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121410843661217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:16:53 no-preload-702694 kubelet[1426]: E1028 13:16:53.613102    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:17:00 no-preload-702694 kubelet[1426]: E1028 13:17:00.627128    1426 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 13:17:00 no-preload-702694 kubelet[1426]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 13:17:00 no-preload-702694 kubelet[1426]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 13:17:00 no-preload-702694 kubelet[1426]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 13:17:00 no-preload-702694 kubelet[1426]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 13:17:00 no-preload-702694 kubelet[1426]: E1028 13:17:00.846114    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121420845803923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:00 no-preload-702694 kubelet[1426]: E1028 13:17:00.846153    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121420845803923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:04 no-preload-702694 kubelet[1426]: E1028 13:17:04.613920    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:17:10 no-preload-702694 kubelet[1426]: E1028 13:17:10.848245    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121430847725705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:10 no-preload-702694 kubelet[1426]: E1028 13:17:10.848291    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121430847725705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:17 no-preload-702694 kubelet[1426]: E1028 13:17:17.612786    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:17:20 no-preload-702694 kubelet[1426]: E1028 13:17:20.850216    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121440849823559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:20 no-preload-702694 kubelet[1426]: E1028 13:17:20.850254    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121440849823559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:28 no-preload-702694 kubelet[1426]: E1028 13:17:28.614443    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:17:30 no-preload-702694 kubelet[1426]: E1028 13:17:30.852193    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121450851832672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:30 no-preload-702694 kubelet[1426]: E1028 13:17:30.852262    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121450851832672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:39 no-preload-702694 kubelet[1426]: E1028 13:17:39.613540    1426 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wxm6t" podUID="9d18f1f3-dae3-4772-9853-f542f264807b"
	Oct 28 13:17:40 no-preload-702694 kubelet[1426]: E1028 13:17:40.854336    1426 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121460854019976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:40 no-preload-702694 kubelet[1426]: E1028 13:17:40.854367    1426 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121460854019976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [cf0d300afa2651a9b3163d096fe80ee4f9cb3ec0e1ad833f3c3f77b7f1c0e33b] <==
	I1028 12:57:06.216178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 12:57:36.222403       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e4540b20ac0113ce295bd32ca4d98232148532297f3a4b9dc1f1a1a3afc8294f] <==
	I1028 12:57:36.870337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:57:36.884374       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:57:36.884653       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:57:54.283683       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:57:54.283997       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-702694_e9fde75e-36ad-4cb2-bf31-1d0c46962973!
	I1028 12:57:54.285998       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14346255-47bd-4506-9bb0-91a999062343", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-702694_e9fde75e-36ad-4cb2-bf31-1d0c46962973 became leader
	I1028 12:57:54.385859       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-702694_e9fde75e-36ad-4cb2-bf31-1d0c46962973!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-702694 -n no-preload-702694
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-702694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-wxm6t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-702694 describe pod metrics-server-6867b74b74-wxm6t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-702694 describe pod metrics-server-6867b74b74-wxm6t: exit status 1 (64.219236ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-wxm6t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-702694 describe pod metrics-server-6867b74b74-wxm6t: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (430.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (386.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-818470 -n embed-certs-818470
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 13:18:06.834055306 +0000 UTC m=+6068.399626750
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-818470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-818470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.742µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-818470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-818470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-818470 logs -n 25: (1.796761008s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-213407 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	|         | disable-driver-mounts-213407                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:05 UTC |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-783661  | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC | 28 Oct 24 13:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783661       | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC | 28 Oct 24 13:18 UTC |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 13:15 UTC | 28 Oct 24 13:15 UTC |
	| start   | -p newest-cni-051506 --memory=2200 --alsologtostderr   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:15 UTC | 28 Oct 24 13:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-051506             | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-051506                  | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-051506 --memory=2200 --alsologtostderr   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:16 UTC | 28 Oct 24 13:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-051506 image list                           | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	| delete  | -p newest-cni-051506                                   | newest-cni-051506            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	| start   | -p auto-297280 --memory=3072                           | auto-297280                  | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC | 28 Oct 24 13:17 UTC |
	| start   | -p kindnet-297280                                      | kindnet-297280               | jenkins | v1.34.0 | 28 Oct 24 13:17 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:17:46
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:17:46.157264  138441 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:17:46.157374  138441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:17:46.157383  138441 out.go:358] Setting ErrFile to fd 2...
	I1028 13:17:46.157387  138441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:17:46.157597  138441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:17:46.158143  138441 out.go:352] Setting JSON to false
	I1028 13:17:46.159044  138441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10816,"bootTime":1730110650,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:17:46.159143  138441 start.go:139] virtualization: kvm guest
	I1028 13:17:46.161379  138441 out.go:177] * [kindnet-297280] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:17:46.162771  138441 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:17:46.162777  138441 notify.go:220] Checking for updates...
	I1028 13:17:46.164159  138441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:17:46.165421  138441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:17:46.166698  138441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:17:46.168033  138441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:17:46.169308  138441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:17:46.171071  138441 config.go:182] Loaded profile config "auto-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:17:46.171180  138441 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:17:46.171258  138441 config.go:182] Loaded profile config "embed-certs-818470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:17:46.171347  138441 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:17:46.208238  138441 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 13:17:46.209518  138441 start.go:297] selected driver: kvm2
	I1028 13:17:46.209538  138441 start.go:901] validating driver "kvm2" against <nil>
	I1028 13:17:46.209554  138441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:17:46.210618  138441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:17:46.210781  138441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:17:46.227142  138441 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:17:46.227251  138441 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 13:17:46.227698  138441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:17:46.227761  138441 cni.go:84] Creating CNI manager for "kindnet"
	I1028 13:17:46.227772  138441 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 13:17:46.227898  138441 start.go:340] cluster config:
	{Name:kindnet-297280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:17:46.228082  138441 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:17:46.230047  138441 out.go:177] * Starting "kindnet-297280" primary control-plane node in "kindnet-297280" cluster
	I1028 13:17:43.437152  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:43.437630  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:43.437659  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:43.437579  137705 retry.go:31] will retry after 1.159570338s: waiting for machine to come up
	I1028 13:17:44.598900  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:44.599445  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:44.599476  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:44.599388  137705 retry.go:31] will retry after 1.669084981s: waiting for machine to come up
	I1028 13:17:46.271193  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:46.271665  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:46.271688  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:46.271613  137705 retry.go:31] will retry after 2.315025035s: waiting for machine to come up
	I1028 13:17:44.020241  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:17:46.519963  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:17:46.231378  138441 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:17:46.231444  138441 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:17:46.231464  138441 cache.go:56] Caching tarball of preloaded images
	I1028 13:17:46.231601  138441 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:17:46.231622  138441 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:17:46.231797  138441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/config.json ...
	I1028 13:17:46.231836  138441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/config.json: {Name:mk5154e161f8ecea89801e024ee344e908321d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:17:46.232053  138441 start.go:360] acquireMachinesLock for kindnet-297280: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:17:48.588864  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:48.589743  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:48.589768  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:48.589696  137705 retry.go:31] will retry after 2.231280424s: waiting for machine to come up
	I1028 13:17:50.823983  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:50.824510  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:50.824538  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:50.824458  137705 retry.go:31] will retry after 3.135384619s: waiting for machine to come up
	I1028 13:17:48.520796  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:17:48.520823  134197 pod_ready.go:82] duration metric: took 4m0.007469397s for pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace to be "Ready" ...
	E1028 13:17:48.520837  134197 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 13:17:48.520845  134197 pod_ready.go:39] duration metric: took 4m1.60733576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:17:48.520859  134197 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:17:48.520885  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:17:48.520927  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:17:48.572214  134197 cri.go:89] found id: "c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc"
	I1028 13:17:48.572254  134197 cri.go:89] found id: ""
	I1028 13:17:48.572272  134197 logs.go:282] 1 containers: [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc]
	I1028 13:17:48.572359  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.577317  134197 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:17:48.577395  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:17:48.625122  134197 cri.go:89] found id: "7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a"
	I1028 13:17:48.625150  134197 cri.go:89] found id: ""
	I1028 13:17:48.625160  134197 logs.go:282] 1 containers: [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a]
	I1028 13:17:48.625219  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.630267  134197 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:17:48.630342  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:17:48.667828  134197 cri.go:89] found id: "6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8"
	I1028 13:17:48.667864  134197 cri.go:89] found id: ""
	I1028 13:17:48.667876  134197 logs.go:282] 1 containers: [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8]
	I1028 13:17:48.667940  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.672102  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:17:48.672172  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:17:48.708108  134197 cri.go:89] found id: "11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f"
	I1028 13:17:48.708133  134197 cri.go:89] found id: ""
	I1028 13:17:48.708143  134197 logs.go:282] 1 containers: [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f]
	I1028 13:17:48.708202  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.712059  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:17:48.712127  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:17:48.749955  134197 cri.go:89] found id: "b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604"
	I1028 13:17:48.749989  134197 cri.go:89] found id: ""
	I1028 13:17:48.750005  134197 logs.go:282] 1 containers: [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604]
	I1028 13:17:48.750088  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.754087  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:17:48.754153  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:17:48.794451  134197 cri.go:89] found id: "018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835"
	I1028 13:17:48.794479  134197 cri.go:89] found id: ""
	I1028 13:17:48.794487  134197 logs.go:282] 1 containers: [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835]
	I1028 13:17:48.794545  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.799249  134197 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:17:48.799375  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:17:48.839664  134197 cri.go:89] found id: ""
	I1028 13:17:48.839698  134197 logs.go:282] 0 containers: []
	W1028 13:17:48.839708  134197 logs.go:284] No container was found matching "kindnet"
	I1028 13:17:48.839720  134197 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 13:17:48.839778  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 13:17:48.873996  134197 cri.go:89] found id: "390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053"
	I1028 13:17:48.874021  134197 cri.go:89] found id: "dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d"
	I1028 13:17:48.874026  134197 cri.go:89] found id: ""
	I1028 13:17:48.874035  134197 logs.go:282] 2 containers: [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053 dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d]
	I1028 13:17:48.874104  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.878308  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:48.881627  134197 logs.go:123] Gathering logs for kubelet ...
	I1028 13:17:48.881652  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:17:48.984950  134197 logs.go:123] Gathering logs for dmesg ...
	I1028 13:17:48.984998  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:17:49.000260  134197 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:17:49.000311  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 13:17:49.132373  134197 logs.go:123] Gathering logs for storage-provisioner [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053] ...
	I1028 13:17:49.132408  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053"
	I1028 13:17:49.172150  134197 logs.go:123] Gathering logs for storage-provisioner [dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d] ...
	I1028 13:17:49.172185  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d"
	I1028 13:17:49.212930  134197 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:17:49.212971  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:17:49.728788  134197 logs.go:123] Gathering logs for kube-apiserver [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc] ...
	I1028 13:17:49.728832  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc"
	I1028 13:17:49.777142  134197 logs.go:123] Gathering logs for etcd [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a] ...
	I1028 13:17:49.777173  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a"
	I1028 13:17:49.820657  134197 logs.go:123] Gathering logs for coredns [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8] ...
	I1028 13:17:49.820688  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8"
	I1028 13:17:49.852563  134197 logs.go:123] Gathering logs for kube-scheduler [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f] ...
	I1028 13:17:49.852596  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f"
	I1028 13:17:49.885279  134197 logs.go:123] Gathering logs for kube-proxy [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604] ...
	I1028 13:17:49.885310  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604"
	I1028 13:17:49.919042  134197 logs.go:123] Gathering logs for kube-controller-manager [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835] ...
	I1028 13:17:49.919072  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835"
	I1028 13:17:49.966747  134197 logs.go:123] Gathering logs for container status ...
	I1028 13:17:49.966780  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:17:52.509835  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:17:52.525618  134197 api_server.go:72] duration metric: took 4m13.312350351s to wait for apiserver process to appear ...
	I1028 13:17:52.525644  134197 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:17:52.525683  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:17:52.525733  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:17:52.560148  134197 cri.go:89] found id: "c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc"
	I1028 13:17:52.560182  134197 cri.go:89] found id: ""
	I1028 13:17:52.560193  134197 logs.go:282] 1 containers: [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc]
	I1028 13:17:52.560259  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.563949  134197 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:17:52.564012  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:17:52.602523  134197 cri.go:89] found id: "7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a"
	I1028 13:17:52.602557  134197 cri.go:89] found id: ""
	I1028 13:17:52.602568  134197 logs.go:282] 1 containers: [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a]
	I1028 13:17:52.602643  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.606248  134197 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:17:52.606317  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:17:52.638441  134197 cri.go:89] found id: "6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8"
	I1028 13:17:52.638474  134197 cri.go:89] found id: ""
	I1028 13:17:52.638485  134197 logs.go:282] 1 containers: [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8]
	I1028 13:17:52.638536  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.642137  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:17:52.642200  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:17:52.681416  134197 cri.go:89] found id: "11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f"
	I1028 13:17:52.681442  134197 cri.go:89] found id: ""
	I1028 13:17:52.681452  134197 logs.go:282] 1 containers: [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f]
	I1028 13:17:52.681508  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.685211  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:17:52.685282  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:17:52.718330  134197 cri.go:89] found id: "b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604"
	I1028 13:17:52.718354  134197 cri.go:89] found id: ""
	I1028 13:17:52.718363  134197 logs.go:282] 1 containers: [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604]
	I1028 13:17:52.718440  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.722020  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:17:52.722082  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:17:53.961817  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:53.962369  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find current IP address of domain auto-297280 in network mk-auto-297280
	I1028 13:17:53.962395  137683 main.go:141] libmachine: (auto-297280) DBG | I1028 13:17:53.962297  137705 retry.go:31] will retry after 4.062801431s: waiting for machine to come up
	I1028 13:17:52.755132  134197 cri.go:89] found id: "018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835"
	I1028 13:17:52.755152  134197 cri.go:89] found id: ""
	I1028 13:17:52.755159  134197 logs.go:282] 1 containers: [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835]
	I1028 13:17:52.755213  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.758629  134197 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:17:52.758699  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:17:52.790495  134197 cri.go:89] found id: ""
	I1028 13:17:52.790518  134197 logs.go:282] 0 containers: []
	W1028 13:17:52.790528  134197 logs.go:284] No container was found matching "kindnet"
	I1028 13:17:52.790537  134197 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 13:17:52.790617  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 13:17:52.823491  134197 cri.go:89] found id: "390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053"
	I1028 13:17:52.823513  134197 cri.go:89] found id: "dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d"
	I1028 13:17:52.823517  134197 cri.go:89] found id: ""
	I1028 13:17:52.823524  134197 logs.go:282] 2 containers: [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053 dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d]
	I1028 13:17:52.823574  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.827085  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:52.830376  134197 logs.go:123] Gathering logs for etcd [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a] ...
	I1028 13:17:52.830403  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a"
	I1028 13:17:52.869716  134197 logs.go:123] Gathering logs for coredns [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8] ...
	I1028 13:17:52.869751  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8"
	I1028 13:17:52.902172  134197 logs.go:123] Gathering logs for kube-scheduler [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f] ...
	I1028 13:17:52.902201  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f"
	I1028 13:17:52.934550  134197 logs.go:123] Gathering logs for storage-provisioner [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053] ...
	I1028 13:17:52.934580  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053"
	I1028 13:17:52.966628  134197 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:17:52.966656  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:17:53.399492  134197 logs.go:123] Gathering logs for container status ...
	I1028 13:17:53.399546  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:17:53.444971  134197 logs.go:123] Gathering logs for kube-apiserver [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc] ...
	I1028 13:17:53.445003  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc"
	I1028 13:17:53.482908  134197 logs.go:123] Gathering logs for dmesg ...
	I1028 13:17:53.482936  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:17:53.496574  134197 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:17:53.496597  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 13:17:53.601982  134197 logs.go:123] Gathering logs for kube-proxy [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604] ...
	I1028 13:17:53.602021  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604"
	I1028 13:17:53.634827  134197 logs.go:123] Gathering logs for kube-controller-manager [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835] ...
	I1028 13:17:53.634860  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835"
	I1028 13:17:53.691651  134197 logs.go:123] Gathering logs for storage-provisioner [dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d] ...
	I1028 13:17:53.691684  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d"
	I1028 13:17:53.724202  134197 logs.go:123] Gathering logs for kubelet ...
	I1028 13:17:53.724234  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:17:56.294039  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:17:56.299533  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 200:
	ok
	I1028 13:17:56.300538  134197 api_server.go:141] control plane version: v1.31.2
	I1028 13:17:56.300560  134197 api_server.go:131] duration metric: took 3.77490958s to wait for apiserver health ...
	I1028 13:17:56.300569  134197 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:17:56.300591  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 13:17:56.300645  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 13:17:56.339322  134197 cri.go:89] found id: "c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc"
	I1028 13:17:56.339349  134197 cri.go:89] found id: ""
	I1028 13:17:56.339360  134197 logs.go:282] 1 containers: [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc]
	I1028 13:17:56.339426  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.343230  134197 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 13:17:56.343282  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 13:17:56.382791  134197 cri.go:89] found id: "7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a"
	I1028 13:17:56.382817  134197 cri.go:89] found id: ""
	I1028 13:17:56.382825  134197 logs.go:282] 1 containers: [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a]
	I1028 13:17:56.382885  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.387029  134197 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 13:17:56.387111  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 13:17:56.421107  134197 cri.go:89] found id: "6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8"
	I1028 13:17:56.421131  134197 cri.go:89] found id: ""
	I1028 13:17:56.421141  134197 logs.go:282] 1 containers: [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8]
	I1028 13:17:56.421190  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.425167  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 13:17:56.425243  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 13:17:56.457599  134197 cri.go:89] found id: "11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f"
	I1028 13:17:56.457621  134197 cri.go:89] found id: ""
	I1028 13:17:56.457631  134197 logs.go:282] 1 containers: [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f]
	I1028 13:17:56.457695  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.461661  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 13:17:56.461715  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 13:17:56.499328  134197 cri.go:89] found id: "b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604"
	I1028 13:17:56.499351  134197 cri.go:89] found id: ""
	I1028 13:17:56.499361  134197 logs.go:282] 1 containers: [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604]
	I1028 13:17:56.499424  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.502913  134197 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 13:17:56.502975  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 13:17:56.536181  134197 cri.go:89] found id: "018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835"
	I1028 13:17:56.536201  134197 cri.go:89] found id: ""
	I1028 13:17:56.536209  134197 logs.go:282] 1 containers: [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835]
	I1028 13:17:56.536260  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.540007  134197 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 13:17:56.540075  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 13:17:56.572678  134197 cri.go:89] found id: ""
	I1028 13:17:56.572707  134197 logs.go:282] 0 containers: []
	W1028 13:17:56.572717  134197 logs.go:284] No container was found matching "kindnet"
	I1028 13:17:56.572726  134197 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 13:17:56.572778  134197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 13:17:56.602701  134197 cri.go:89] found id: "390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053"
	I1028 13:17:56.602720  134197 cri.go:89] found id: "dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d"
	I1028 13:17:56.602726  134197 cri.go:89] found id: ""
	I1028 13:17:56.602735  134197 logs.go:282] 2 containers: [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053 dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d]
	I1028 13:17:56.602793  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.606454  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:17:56.609937  134197 logs.go:123] Gathering logs for kubelet ...
	I1028 13:17:56.609955  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 13:17:56.679312  134197 logs.go:123] Gathering logs for kube-apiserver [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc] ...
	I1028 13:17:56.679349  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc"
	I1028 13:17:56.727414  134197 logs.go:123] Gathering logs for etcd [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a] ...
	I1028 13:17:56.727448  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a"
	I1028 13:17:56.769816  134197 logs.go:123] Gathering logs for kube-scheduler [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f] ...
	I1028 13:17:56.769850  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f"
	I1028 13:17:56.801724  134197 logs.go:123] Gathering logs for kube-proxy [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604] ...
	I1028 13:17:56.801752  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604"
	I1028 13:17:56.833743  134197 logs.go:123] Gathering logs for storage-provisioner [dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d] ...
	I1028 13:17:56.833770  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d"
	I1028 13:17:56.864986  134197 logs.go:123] Gathering logs for container status ...
	I1028 13:17:56.865023  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 13:17:56.906955  134197 logs.go:123] Gathering logs for dmesg ...
	I1028 13:17:56.906990  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 13:17:56.918818  134197 logs.go:123] Gathering logs for describe nodes ...
	I1028 13:17:56.918848  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 13:17:57.017134  134197 logs.go:123] Gathering logs for coredns [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8] ...
	I1028 13:17:57.017175  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8"
	I1028 13:17:57.051270  134197 logs.go:123] Gathering logs for kube-controller-manager [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835] ...
	I1028 13:17:57.051302  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835"
	I1028 13:17:57.100383  134197 logs.go:123] Gathering logs for storage-provisioner [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053] ...
	I1028 13:17:57.100416  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053"
	I1028 13:17:57.137460  134197 logs.go:123] Gathering logs for CRI-O ...
	I1028 13:17:57.137488  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 13:17:59.411831  138441 start.go:364] duration metric: took 13.17971836s to acquireMachinesLock for "kindnet-297280"
	I1028 13:17:59.411894  138441 start.go:93] Provisioning new machine with config: &{Name:kindnet-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:17:59.412021  138441 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 13:17:59.993939  134197 system_pods.go:59] 8 kube-system pods found
	I1028 13:17:59.993967  134197 system_pods.go:61] "coredns-7c65d6cfc9-x8gvd" [4498824f-7ce1-4167-8701-74cadd3fa83c] Running
	I1028 13:17:59.993973  134197 system_pods.go:61] "etcd-default-k8s-diff-port-783661" [9a8a5a39-b0bb-4144-9e70-98fed2bbc838] Running
	I1028 13:17:59.993979  134197 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-783661" [e221604a-5b54-4755-952d-0c699167f402] Running
	I1028 13:17:59.993982  134197 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-783661" [95e9472e-3c24-4fd8-b79c-949d8cd980da] Running
	I1028 13:17:59.993986  134197 system_pods.go:61] "kube-proxy-ff797" [ed2dce0b-4dc9-406e-a9c3-f91d75fa0876] Running
	I1028 13:17:59.993989  134197 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-783661" [7cab2cef-dacb-4943-9564-a1a625afa198] Running
	I1028 13:17:59.993995  134197 system_pods.go:61] "metrics-server-6867b74b74-rkx62" [31c37fb4-0650-481d-b1e3-4956769450d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 13:17:59.993999  134197 system_pods.go:61] "storage-provisioner" [21a53238-251d-4581-b4c3-3a788545ab0c] Running
	I1028 13:17:59.994008  134197 system_pods.go:74] duration metric: took 3.693432039s to wait for pod list to return data ...
	I1028 13:17:59.994016  134197 default_sa.go:34] waiting for default service account to be created ...
	I1028 13:17:59.996583  134197 default_sa.go:45] found service account: "default"
	I1028 13:17:59.996609  134197 default_sa.go:55] duration metric: took 2.588067ms for default service account to be created ...
	I1028 13:17:59.996619  134197 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 13:18:00.001662  134197 system_pods.go:86] 8 kube-system pods found
	I1028 13:18:00.001692  134197 system_pods.go:89] "coredns-7c65d6cfc9-x8gvd" [4498824f-7ce1-4167-8701-74cadd3fa83c] Running
	I1028 13:18:00.001702  134197 system_pods.go:89] "etcd-default-k8s-diff-port-783661" [9a8a5a39-b0bb-4144-9e70-98fed2bbc838] Running
	I1028 13:18:00.001709  134197 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-783661" [e221604a-5b54-4755-952d-0c699167f402] Running
	I1028 13:18:00.001716  134197 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-783661" [95e9472e-3c24-4fd8-b79c-949d8cd980da] Running
	I1028 13:18:00.001722  134197 system_pods.go:89] "kube-proxy-ff797" [ed2dce0b-4dc9-406e-a9c3-f91d75fa0876] Running
	I1028 13:18:00.001728  134197 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-783661" [7cab2cef-dacb-4943-9564-a1a625afa198] Running
	I1028 13:18:00.001738  134197 system_pods.go:89] "metrics-server-6867b74b74-rkx62" [31c37fb4-0650-481d-b1e3-4956769450d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 13:18:00.001787  134197 system_pods.go:89] "storage-provisioner" [21a53238-251d-4581-b4c3-3a788545ab0c] Running
	I1028 13:18:00.001806  134197 system_pods.go:126] duration metric: took 5.179707ms to wait for k8s-apps to be running ...
	I1028 13:18:00.001813  134197 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 13:18:00.001867  134197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:18:00.024213  134197 system_svc.go:56] duration metric: took 22.386891ms WaitForService to wait for kubelet
	I1028 13:18:00.024248  134197 kubeadm.go:582] duration metric: took 4m20.810982393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:18:00.024274  134197 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:18:00.027595  134197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:18:00.027620  134197 node_conditions.go:123] node cpu capacity is 2
	I1028 13:18:00.027658  134197 node_conditions.go:105] duration metric: took 3.377907ms to run NodePressure ...
	I1028 13:18:00.027674  134197 start.go:241] waiting for startup goroutines ...
	I1028 13:18:00.027694  134197 start.go:246] waiting for cluster config update ...
	I1028 13:18:00.027711  134197 start.go:255] writing updated cluster config ...
	I1028 13:18:00.028044  134197 ssh_runner.go:195] Run: rm -f paused
	I1028 13:18:00.089801  134197 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 13:18:00.091923  134197 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-783661" cluster and "default" namespace by default
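The final skew check above compares the host's kubectl (1.31.2) against the cluster it just configured (1.31.2, minor skew 0). A minimal way to repeat that comparison by hand, assuming the kubeconfig context written by this run is still present and that jq is installed on the host:

	# Re-run the client/server version comparison for the context minikube just wrote.
	kubectl config use-context default-k8s-diff-port-783661
	kubectl version -o json | jq -r '"client=\(.clientVersion.gitVersion) server=\(.serverVersion.gitVersion)"'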
	I1028 13:17:58.026591  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.027077  137683 main.go:141] libmachine: (auto-297280) Found IP for machine: 192.168.39.218
	I1028 13:17:58.027108  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has current primary IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.027116  137683 main.go:141] libmachine: (auto-297280) Reserving static IP address...
	I1028 13:17:58.027437  137683 main.go:141] libmachine: (auto-297280) DBG | unable to find host DHCP lease matching {name: "auto-297280", mac: "52:54:00:45:ad:56", ip: "192.168.39.218"} in network mk-auto-297280
	I1028 13:17:58.101225  137683 main.go:141] libmachine: (auto-297280) DBG | Getting to WaitForSSH function...
	I1028 13:17:58.101258  137683 main.go:141] libmachine: (auto-297280) Reserved static IP address: 192.168.39.218
	I1028 13:17:58.101271  137683 main.go:141] libmachine: (auto-297280) Waiting for SSH to be available...
	I1028 13:17:58.103616  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.104102  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.104138  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.104301  137683 main.go:141] libmachine: (auto-297280) DBG | Using SSH client type: external
	I1028 13:17:58.104325  137683 main.go:141] libmachine: (auto-297280) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/id_rsa (-rw-------)
	I1028 13:17:58.104352  137683 main.go:141] libmachine: (auto-297280) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 13:17:58.104400  137683 main.go:141] libmachine: (auto-297280) DBG | About to run SSH command:
	I1028 13:17:58.104446  137683 main.go:141] libmachine: (auto-297280) DBG | exit 0
	I1028 13:17:58.227413  137683 main.go:141] libmachine: (auto-297280) DBG | SSH cmd err, output: <nil>: 
	I1028 13:17:58.227718  137683 main.go:141] libmachine: (auto-297280) KVM machine creation complete!
	I1028 13:17:58.228035  137683 main.go:141] libmachine: (auto-297280) Calling .GetConfigRaw
	I1028 13:17:58.228653  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:58.228828  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:58.228964  137683 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 13:17:58.228981  137683 main.go:141] libmachine: (auto-297280) Calling .GetState
	I1028 13:17:58.230251  137683 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 13:17:58.230266  137683 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 13:17:58.230273  137683 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 13:17:58.230297  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:58.232482  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.232862  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.232887  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.233053  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:58.233230  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.233385  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.233534  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:58.233736  137683 main.go:141] libmachine: Using SSH client type: native
	I1028 13:17:58.233938  137683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I1028 13:17:58.233949  137683 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 13:17:58.334618  137683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 13:17:58.334644  137683 main.go:141] libmachine: Detecting the provisioner...
	I1028 13:17:58.334655  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:58.337442  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.337842  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.337871  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.338011  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:58.338186  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.338328  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.338504  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:58.338664  137683 main.go:141] libmachine: Using SSH client type: native
	I1028 13:17:58.338829  137683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I1028 13:17:58.338845  137683 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 13:17:58.443945  137683 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 13:17:58.444036  137683 main.go:141] libmachine: found compatible host: buildroot
	I1028 13:17:58.444052  137683 main.go:141] libmachine: Provisioning with buildroot...
	I1028 13:17:58.444062  137683 main.go:141] libmachine: (auto-297280) Calling .GetMachineName
	I1028 13:17:58.444303  137683 buildroot.go:166] provisioning hostname "auto-297280"
	I1028 13:17:58.444330  137683 main.go:141] libmachine: (auto-297280) Calling .GetMachineName
	I1028 13:17:58.444509  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:58.447382  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.447763  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.447791  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.447924  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:58.448104  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.448268  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.448435  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:58.448613  137683 main.go:141] libmachine: Using SSH client type: native
	I1028 13:17:58.448830  137683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I1028 13:17:58.448843  137683 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-297280 && echo "auto-297280" | sudo tee /etc/hostname
	I1028 13:17:58.565509  137683 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-297280
	
	I1028 13:17:58.565546  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:58.568548  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.568925  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.568956  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.569117  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:58.569298  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.569430  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.569534  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:58.569660  137683 main.go:141] libmachine: Using SSH client type: native
	I1028 13:17:58.569837  137683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I1028 13:17:58.569864  137683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-297280' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-297280/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-297280' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 13:17:58.679738  137683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
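The grep/sed block run just above is an idempotent /etc/hosts rewrite: it only touches the 127.0.1.1 entry, and only when the new hostname is not already present. A quick way to confirm the result inside the guest (names taken from this run):

	hostname                              # expect: auto-297280
	grep -n '^127\.0\.1\.1' /etc/hosts    # expect: 127.0.1.1 auto-297280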
	I1028 13:17:58.679787  137683 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 13:17:58.679829  137683 buildroot.go:174] setting up certificates
	I1028 13:17:58.679862  137683 provision.go:84] configureAuth start
	I1028 13:17:58.679876  137683 main.go:141] libmachine: (auto-297280) Calling .GetMachineName
	I1028 13:17:58.680155  137683 main.go:141] libmachine: (auto-297280) Calling .GetIP
	I1028 13:17:58.683006  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.683415  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.683444  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.683648  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:58.685703  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.685998  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.686025  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.686145  137683 provision.go:143] copyHostCerts
	I1028 13:17:58.686222  137683 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 13:17:58.686238  137683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 13:17:58.686313  137683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 13:17:58.686460  137683 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 13:17:58.686471  137683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 13:17:58.686520  137683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 13:17:58.686702  137683 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 13:17:58.686719  137683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 13:17:58.686765  137683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 13:17:58.686857  137683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.auto-297280 san=[127.0.0.1 192.168.39.218 auto-297280 localhost minikube]
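The server certificate generated here carries the SANs listed at the end of that line (127.0.0.1, 192.168.39.218, auto-297280, localhost, minikube). They can be inspected on the host with standard openssl tooling; a sketch, assuming openssl is installed:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'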
	I1028 13:17:58.810977  137683 provision.go:177] copyRemoteCerts
	I1028 13:17:58.811043  137683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 13:17:58.811077  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:58.813636  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.813939  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.813970  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.814116  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:58.814314  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.814468  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:58.814620  137683 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/id_rsa Username:docker}
	I1028 13:17:58.893523  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 13:17:58.915367  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 13:17:58.935930  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1028 13:17:58.956764  137683 provision.go:87] duration metric: took 276.888586ms to configureAuth
	I1028 13:17:58.956791  137683 buildroot.go:189] setting minikube options for container-runtime
	I1028 13:17:58.956958  137683 config.go:182] Loaded profile config "auto-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:17:58.957028  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:58.959556  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.959887  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:58.959907  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:58.960081  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:58.960253  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.960433  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:58.960570  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:58.960724  137683 main.go:141] libmachine: Using SSH client type: native
	I1028 13:17:58.960886  137683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I1028 13:17:58.960903  137683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 13:17:59.183305  137683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 13:17:59.183362  137683 main.go:141] libmachine: Checking connection to Docker...
	I1028 13:17:59.183374  137683 main.go:141] libmachine: (auto-297280) Calling .GetURL
	I1028 13:17:59.184743  137683 main.go:141] libmachine: (auto-297280) DBG | Using libvirt version 6000000
	I1028 13:17:59.187187  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.187570  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:59.187606  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.187812  137683 main.go:141] libmachine: Docker is up and running!
	I1028 13:17:59.187829  137683 main.go:141] libmachine: Reticulating splines...
	I1028 13:17:59.187837  137683 client.go:171] duration metric: took 21.660670342s to LocalClient.Create
	I1028 13:17:59.187867  137683 start.go:167] duration metric: took 21.660739135s to libmachine.API.Create "auto-297280"
	I1028 13:17:59.187877  137683 start.go:293] postStartSetup for "auto-297280" (driver="kvm2")
	I1028 13:17:59.187889  137683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 13:17:59.187910  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:59.188177  137683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 13:17:59.188209  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:59.190306  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.190589  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:59.190613  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.190786  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:59.190987  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:59.191175  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:59.191340  137683 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/id_rsa Username:docker}
	I1028 13:17:59.268922  137683 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 13:17:59.272580  137683 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 13:17:59.272609  137683 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 13:17:59.272673  137683 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 13:17:59.272748  137683 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 13:17:59.272893  137683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 13:17:59.281274  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:17:59.305799  137683 start.go:296] duration metric: took 117.908479ms for postStartSetup
	I1028 13:17:59.305864  137683 main.go:141] libmachine: (auto-297280) Calling .GetConfigRaw
	I1028 13:17:59.306382  137683 main.go:141] libmachine: (auto-297280) Calling .GetIP
	I1028 13:17:59.308844  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.309212  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:59.309235  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.309502  137683 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/config.json ...
	I1028 13:17:59.309677  137683 start.go:128] duration metric: took 21.801801202s to createHost
	I1028 13:17:59.309726  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:59.311757  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.312118  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:59.312146  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.312283  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:59.312457  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:59.312599  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:59.312743  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:59.312899  137683 main.go:141] libmachine: Using SSH client type: native
	I1028 13:17:59.313057  137683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I1028 13:17:59.313067  137683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 13:17:59.411602  137683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730121479.386125904
	
	I1028 13:17:59.411647  137683 fix.go:216] guest clock: 1730121479.386125904
	I1028 13:17:59.411658  137683 fix.go:229] Guest: 2024-10-28 13:17:59.386125904 +0000 UTC Remote: 2024-10-28 13:17:59.309697351 +0000 UTC m=+21.910922275 (delta=76.428553ms)
	I1028 13:17:59.411715  137683 fix.go:200] guest clock delta is within tolerance: 76.428553ms
	I1028 13:17:59.411727  137683 start.go:83] releasing machines lock for "auto-297280", held for 21.903937068s
	I1028 13:17:59.411768  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:59.412026  137683 main.go:141] libmachine: (auto-297280) Calling .GetIP
	I1028 13:17:59.414945  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.415402  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:59.415429  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.415578  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:59.416047  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:59.416234  137683 main.go:141] libmachine: (auto-297280) Calling .DriverName
	I1028 13:17:59.416317  137683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 13:17:59.416362  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:59.416454  137683 ssh_runner.go:195] Run: cat /version.json
	I1028 13:17:59.416473  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHHostname
	I1028 13:17:59.419027  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.419229  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.419305  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:59.419330  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.419524  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:59.419708  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:59.419783  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:17:59.419804  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:17:59.419892  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:59.419961  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHPort
	I1028 13:17:59.420067  137683 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/id_rsa Username:docker}
	I1028 13:17:59.420113  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHKeyPath
	I1028 13:17:59.420251  137683 main.go:141] libmachine: (auto-297280) Calling .GetSSHUsername
	I1028 13:17:59.420373  137683 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/auto-297280/id_rsa Username:docker}
	I1028 13:17:59.526450  137683 ssh_runner.go:195] Run: systemctl --version
	I1028 13:17:59.532697  137683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 13:17:59.693130  137683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 13:17:59.700623  137683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 13:17:59.700688  137683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 13:17:59.718948  137683 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 13:17:59.718986  137683 start.go:495] detecting cgroup driver to use...
	I1028 13:17:59.719056  137683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 13:17:59.734557  137683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 13:17:59.747472  137683 docker.go:217] disabling cri-docker service (if available) ...
	I1028 13:17:59.747550  137683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 13:17:59.760389  137683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 13:17:59.773901  137683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 13:17:59.893818  137683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 13:18:00.052618  137683 docker.go:233] disabling docker service ...
	I1028 13:18:00.052684  137683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 13:18:00.071340  137683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 13:18:00.084540  137683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 13:18:00.242551  137683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 13:18:00.388719  137683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 13:18:00.402389  137683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 13:18:00.420557  137683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 13:18:00.420613  137683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:18:00.431390  137683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 13:18:00.431445  137683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:18:00.442294  137683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:18:00.452886  137683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:18:00.463429  137683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 13:18:00.475912  137683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:18:00.487677  137683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:18:00.505810  137683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
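The run of sed commands above edits a single CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged low ports via default_sysctls. A quick way to check the end state on the guest (expected values reconstructed from the edits, not captured from the VM):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",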
	I1028 13:18:00.516009  137683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 13:18:00.525172  137683 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 13:18:00.525230  137683 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 13:18:00.536559  137683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 13:18:00.546134  137683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:18:00.688624  137683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 13:18:00.784708  137683 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 13:18:00.784777  137683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 13:18:00.789528  137683 start.go:563] Will wait 60s for crictl version
	I1028 13:18:00.789586  137683 ssh_runner.go:195] Run: which crictl
	I1028 13:18:00.792929  137683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 13:18:00.826768  137683 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
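Before using the runtime, the log waits for the CRI socket and then queries it with crictl; the version block above is that query's output. The same two probes can be run by hand inside the guest:

	stat /var/run/crio/crio.sock     # present once "systemctl restart crio" has finished
	sudo /usr/bin/crictl version     # prints RuntimeName/RuntimeVersion as shown above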
	I1028 13:18:00.826880  137683 ssh_runner.go:195] Run: crio --version
	I1028 13:18:00.852915  137683 ssh_runner.go:195] Run: crio --version
	I1028 13:18:00.889797  137683 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 13:17:59.414047  138441 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 13:17:59.414231  138441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:17:59.414305  138441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:17:59.430633  138441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42473
	I1028 13:17:59.431115  138441 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:17:59.431692  138441 main.go:141] libmachine: Using API Version  1
	I1028 13:17:59.431714  138441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:17:59.432053  138441 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:17:59.432245  138441 main.go:141] libmachine: (kindnet-297280) Calling .GetMachineName
	I1028 13:17:59.432383  138441 main.go:141] libmachine: (kindnet-297280) Calling .DriverName
	I1028 13:17:59.432542  138441 start.go:159] libmachine.API.Create for "kindnet-297280" (driver="kvm2")
	I1028 13:17:59.432573  138441 client.go:168] LocalClient.Create starting
	I1028 13:17:59.432609  138441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 13:17:59.432653  138441 main.go:141] libmachine: Decoding PEM data...
	I1028 13:17:59.432675  138441 main.go:141] libmachine: Parsing certificate...
	I1028 13:17:59.432742  138441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 13:17:59.432769  138441 main.go:141] libmachine: Decoding PEM data...
	I1028 13:17:59.432798  138441 main.go:141] libmachine: Parsing certificate...
	I1028 13:17:59.432822  138441 main.go:141] libmachine: Running pre-create checks...
	I1028 13:17:59.432838  138441 main.go:141] libmachine: (kindnet-297280) Calling .PreCreateCheck
	I1028 13:17:59.433268  138441 main.go:141] libmachine: (kindnet-297280) Calling .GetConfigRaw
	I1028 13:17:59.433701  138441 main.go:141] libmachine: Creating machine...
	I1028 13:17:59.433720  138441 main.go:141] libmachine: (kindnet-297280) Calling .Create
	I1028 13:17:59.434033  138441 main.go:141] libmachine: (kindnet-297280) Creating KVM machine...
	I1028 13:17:59.435614  138441 main.go:141] libmachine: (kindnet-297280) DBG | found existing default KVM network
	I1028 13:17:59.437370  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.437199  138551 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:a7:4b} reservation:<nil>}
	I1028 13:17:59.438381  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.438290  138551 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:cf:a6} reservation:<nil>}
	I1028 13:17:59.439522  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.439397  138551 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:18:11:40} reservation:<nil>}
	I1028 13:17:59.440682  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.440605  138551 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002efb70}
	I1028 13:17:59.440707  138441 main.go:141] libmachine: (kindnet-297280) DBG | created network xml: 
	I1028 13:17:59.440719  138441 main.go:141] libmachine: (kindnet-297280) DBG | <network>
	I1028 13:17:59.440730  138441 main.go:141] libmachine: (kindnet-297280) DBG |   <name>mk-kindnet-297280</name>
	I1028 13:17:59.440741  138441 main.go:141] libmachine: (kindnet-297280) DBG |   <dns enable='no'/>
	I1028 13:17:59.440751  138441 main.go:141] libmachine: (kindnet-297280) DBG |   
	I1028 13:17:59.440761  138441 main.go:141] libmachine: (kindnet-297280) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1028 13:17:59.440772  138441 main.go:141] libmachine: (kindnet-297280) DBG |     <dhcp>
	I1028 13:17:59.440778  138441 main.go:141] libmachine: (kindnet-297280) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1028 13:17:59.440785  138441 main.go:141] libmachine: (kindnet-297280) DBG |     </dhcp>
	I1028 13:17:59.440790  138441 main.go:141] libmachine: (kindnet-297280) DBG |   </ip>
	I1028 13:17:59.440793  138441 main.go:141] libmachine: (kindnet-297280) DBG |   
	I1028 13:17:59.440801  138441 main.go:141] libmachine: (kindnet-297280) DBG | </network>
	I1028 13:17:59.440815  138441 main.go:141] libmachine: (kindnet-297280) DBG | 
	I1028 13:17:59.446210  138441 main.go:141] libmachine: (kindnet-297280) DBG | trying to create private KVM network mk-kindnet-297280 192.168.72.0/24...
	I1028 13:17:59.514860  138441 main.go:141] libmachine: (kindnet-297280) DBG | private KVM network mk-kindnet-297280 192.168.72.0/24 created
	I1028 13:17:59.514885  138441 main.go:141] libmachine: (kindnet-297280) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280 ...
	I1028 13:17:59.514899  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.514833  138551 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:17:59.515009  138441 main.go:141] libmachine: (kindnet-297280) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 13:17:59.515072  138441 main.go:141] libmachine: (kindnet-297280) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 13:17:59.810257  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.810124  138551 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280/id_rsa...
	I1028 13:17:59.874690  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.874571  138551 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280/kindnet-297280.rawdisk...
	I1028 13:17:59.874717  138441 main.go:141] libmachine: (kindnet-297280) DBG | Writing magic tar header
	I1028 13:17:59.874727  138441 main.go:141] libmachine: (kindnet-297280) DBG | Writing SSH key tar header
	I1028 13:17:59.874735  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:17:59.874698  138551 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280 ...
	I1028 13:17:59.874886  138441 main.go:141] libmachine: (kindnet-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280
	I1028 13:17:59.874915  138441 main.go:141] libmachine: (kindnet-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280 (perms=drwx------)
	I1028 13:17:59.874923  138441 main.go:141] libmachine: (kindnet-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 13:17:59.874936  138441 main.go:141] libmachine: (kindnet-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:17:59.874956  138441 main.go:141] libmachine: (kindnet-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 13:17:59.874969  138441 main.go:141] libmachine: (kindnet-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 13:17:59.874987  138441 main.go:141] libmachine: (kindnet-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 13:17:59.874996  138441 main.go:141] libmachine: (kindnet-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 13:17:59.875001  138441 main.go:141] libmachine: (kindnet-297280) DBG | Checking permissions on dir: /home/jenkins
	I1028 13:17:59.875010  138441 main.go:141] libmachine: (kindnet-297280) DBG | Checking permissions on dir: /home
	I1028 13:17:59.875015  138441 main.go:141] libmachine: (kindnet-297280) DBG | Skipping /home - not owner
	I1028 13:17:59.875029  138441 main.go:141] libmachine: (kindnet-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 13:17:59.875042  138441 main.go:141] libmachine: (kindnet-297280) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 13:17:59.875069  138441 main.go:141] libmachine: (kindnet-297280) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 13:17:59.875090  138441 main.go:141] libmachine: (kindnet-297280) Creating domain...
	I1028 13:17:59.876257  138441 main.go:141] libmachine: (kindnet-297280) define libvirt domain using xml: 
	I1028 13:17:59.876274  138441 main.go:141] libmachine: (kindnet-297280) <domain type='kvm'>
	I1028 13:17:59.876283  138441 main.go:141] libmachine: (kindnet-297280)   <name>kindnet-297280</name>
	I1028 13:17:59.876297  138441 main.go:141] libmachine: (kindnet-297280)   <memory unit='MiB'>3072</memory>
	I1028 13:17:59.876306  138441 main.go:141] libmachine: (kindnet-297280)   <vcpu>2</vcpu>
	I1028 13:17:59.876313  138441 main.go:141] libmachine: (kindnet-297280)   <features>
	I1028 13:17:59.876322  138441 main.go:141] libmachine: (kindnet-297280)     <acpi/>
	I1028 13:17:59.876332  138441 main.go:141] libmachine: (kindnet-297280)     <apic/>
	I1028 13:17:59.876339  138441 main.go:141] libmachine: (kindnet-297280)     <pae/>
	I1028 13:17:59.876348  138441 main.go:141] libmachine: (kindnet-297280)     
	I1028 13:17:59.876357  138441 main.go:141] libmachine: (kindnet-297280)   </features>
	I1028 13:17:59.876371  138441 main.go:141] libmachine: (kindnet-297280)   <cpu mode='host-passthrough'>
	I1028 13:17:59.876378  138441 main.go:141] libmachine: (kindnet-297280)   
	I1028 13:17:59.876391  138441 main.go:141] libmachine: (kindnet-297280)   </cpu>
	I1028 13:17:59.876399  138441 main.go:141] libmachine: (kindnet-297280)   <os>
	I1028 13:17:59.876403  138441 main.go:141] libmachine: (kindnet-297280)     <type>hvm</type>
	I1028 13:17:59.876414  138441 main.go:141] libmachine: (kindnet-297280)     <boot dev='cdrom'/>
	I1028 13:17:59.876421  138441 main.go:141] libmachine: (kindnet-297280)     <boot dev='hd'/>
	I1028 13:17:59.876433  138441 main.go:141] libmachine: (kindnet-297280)     <bootmenu enable='no'/>
	I1028 13:17:59.876443  138441 main.go:141] libmachine: (kindnet-297280)   </os>
	I1028 13:17:59.876453  138441 main.go:141] libmachine: (kindnet-297280)   <devices>
	I1028 13:17:59.876461  138441 main.go:141] libmachine: (kindnet-297280)     <disk type='file' device='cdrom'>
	I1028 13:17:59.876482  138441 main.go:141] libmachine: (kindnet-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280/boot2docker.iso'/>
	I1028 13:17:59.876490  138441 main.go:141] libmachine: (kindnet-297280)       <target dev='hdc' bus='scsi'/>
	I1028 13:17:59.876496  138441 main.go:141] libmachine: (kindnet-297280)       <readonly/>
	I1028 13:17:59.876502  138441 main.go:141] libmachine: (kindnet-297280)     </disk>
	I1028 13:17:59.876534  138441 main.go:141] libmachine: (kindnet-297280)     <disk type='file' device='disk'>
	I1028 13:17:59.876556  138441 main.go:141] libmachine: (kindnet-297280)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 13:17:59.876586  138441 main.go:141] libmachine: (kindnet-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/kindnet-297280/kindnet-297280.rawdisk'/>
	I1028 13:17:59.876602  138441 main.go:141] libmachine: (kindnet-297280)       <target dev='hda' bus='virtio'/>
	I1028 13:17:59.876614  138441 main.go:141] libmachine: (kindnet-297280)     </disk>
	I1028 13:17:59.876624  138441 main.go:141] libmachine: (kindnet-297280)     <interface type='network'>
	I1028 13:17:59.876635  138441 main.go:141] libmachine: (kindnet-297280)       <source network='mk-kindnet-297280'/>
	I1028 13:17:59.876644  138441 main.go:141] libmachine: (kindnet-297280)       <model type='virtio'/>
	I1028 13:17:59.876652  138441 main.go:141] libmachine: (kindnet-297280)     </interface>
	I1028 13:17:59.876662  138441 main.go:141] libmachine: (kindnet-297280)     <interface type='network'>
	I1028 13:17:59.876691  138441 main.go:141] libmachine: (kindnet-297280)       <source network='default'/>
	I1028 13:17:59.876711  138441 main.go:141] libmachine: (kindnet-297280)       <model type='virtio'/>
	I1028 13:17:59.876732  138441 main.go:141] libmachine: (kindnet-297280)     </interface>
	I1028 13:17:59.876747  138441 main.go:141] libmachine: (kindnet-297280)     <serial type='pty'>
	I1028 13:17:59.876758  138441 main.go:141] libmachine: (kindnet-297280)       <target port='0'/>
	I1028 13:17:59.876767  138441 main.go:141] libmachine: (kindnet-297280)     </serial>
	I1028 13:17:59.876781  138441 main.go:141] libmachine: (kindnet-297280)     <console type='pty'>
	I1028 13:17:59.876791  138441 main.go:141] libmachine: (kindnet-297280)       <target type='serial' port='0'/>
	I1028 13:17:59.876799  138441 main.go:141] libmachine: (kindnet-297280)     </console>
	I1028 13:17:59.876818  138441 main.go:141] libmachine: (kindnet-297280)     <rng model='virtio'>
	I1028 13:17:59.876833  138441 main.go:141] libmachine: (kindnet-297280)       <backend model='random'>/dev/random</backend>
	I1028 13:17:59.876848  138441 main.go:141] libmachine: (kindnet-297280)     </rng>
	I1028 13:17:59.876857  138441 main.go:141] libmachine: (kindnet-297280)     
	I1028 13:17:59.876866  138441 main.go:141] libmachine: (kindnet-297280)     
	I1028 13:17:59.876873  138441 main.go:141] libmachine: (kindnet-297280)   </devices>
	I1028 13:17:59.876882  138441 main.go:141] libmachine: (kindnet-297280) </domain>
	I1028 13:17:59.876892  138441 main.go:141] libmachine: (kindnet-297280) 
	I1028 13:17:59.881471  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:33:17:50 in network default
	I1028 13:17:59.882015  138441 main.go:141] libmachine: (kindnet-297280) Ensuring networks are active...
	I1028 13:17:59.882035  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:17:59.882695  138441 main.go:141] libmachine: (kindnet-297280) Ensuring network default is active
	I1028 13:17:59.883041  138441 main.go:141] libmachine: (kindnet-297280) Ensuring network mk-kindnet-297280 is active
	I1028 13:17:59.883541  138441 main.go:141] libmachine: (kindnet-297280) Getting domain xml...
	I1028 13:17:59.884381  138441 main.go:141] libmachine: (kindnet-297280) Creating domain...
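Everything between "created network xml" and "Creating domain..." is plain libvirt: a private NAT network (mk-kindnet-297280, 192.168.72.0/24) plus a domain defined from the XML printed above. Once the domain is booting, the result can be inspected with standard virsh commands (connection URI as in the machine config):

	virsh -c qemu:///system net-dumpxml mk-kindnet-297280
	virsh -c qemu:///system dominfo kindnet-297280
	virsh -c qemu:///system domifaddr kindnet-297280   # shows the DHCP lease once the guest is up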
	I1028 13:18:00.891153  137683 main.go:141] libmachine: (auto-297280) Calling .GetIP
	I1028 13:18:00.894371  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:18:00.894798  137683 main.go:141] libmachine: (auto-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ad:56", ip: ""} in network mk-auto-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:17:51 +0000 UTC Type:0 Mac:52:54:00:45:ad:56 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:auto-297280 Clientid:01:52:54:00:45:ad:56}
	I1028 13:18:00.894823  137683 main.go:141] libmachine: (auto-297280) DBG | domain auto-297280 has defined IP address 192.168.39.218 and MAC address 52:54:00:45:ad:56 in network mk-auto-297280
	I1028 13:18:00.895074  137683 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 13:18:00.900000  137683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
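
The one-liner at 13:18:00.900000 updates /etc/hosts on the guest: it filters out any existing host.minikube.internal entry, appends the new mapping, writes the result to a temp file, and copies it back with sudo (a plain `sudo ... > /etc/hosts` would fail because the redirection is performed by the unprivileged shell, not by sudo). A minimal sketch that builds the same command string; buildHostsUpdate is a hypothetical helper, not minikube's code:

    package main

    import "fmt"

    // buildHostsUpdate returns a bash one-liner equivalent to the one in the log above:
    // strip any existing entry for host, append "ip<TAB>host", then sudo-copy the result
    // back over /etc/hosts.
    func buildHostsUpdate(ip, host string) string {
    	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
    		host, ip, host)
    }

    func main() {
    	fmt.Println(buildHostsUpdate("192.168.39.1", "host.minikube.internal"))
    }

The same pattern reappears further down for control-plane.minikube.internal.
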
	I1028 13:18:00.912622  137683 kubeadm.go:883] updating cluster {Name:auto-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:auto-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 13:18:00.912742  137683 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:18:00.912808  137683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:18:00.944477  137683 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 13:18:00.944551  137683 ssh_runner.go:195] Run: which lz4
	I1028 13:18:00.948491  137683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 13:18:00.952489  137683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 13:18:00.952518  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 13:18:02.146587  137683 crio.go:462] duration metric: took 1.198116052s to copy over tarball
	I1028 13:18:02.146707  137683 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 13:18:01.307669  138441 main.go:141] libmachine: (kindnet-297280) Waiting to get IP...
	I1028 13:18:01.308613  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:01.309130  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:01.309161  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:01.309138  138551 retry.go:31] will retry after 282.561184ms: waiting for machine to come up
	I1028 13:18:01.593881  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:01.595400  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:01.595433  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:01.595352  138551 retry.go:31] will retry after 269.375507ms: waiting for machine to come up
	I1028 13:18:01.866934  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:01.867437  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:01.867467  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:01.867397  138551 retry.go:31] will retry after 304.943718ms: waiting for machine to come up
	I1028 13:18:02.173991  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:02.174518  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:02.174543  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:02.174480  138551 retry.go:31] will retry after 381.120655ms: waiting for machine to come up
	I1028 13:18:02.556924  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:02.557561  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:02.557590  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:02.557507  138551 retry.go:31] will retry after 671.811523ms: waiting for machine to come up
	I1028 13:18:03.231328  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:03.231846  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:03.231879  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:03.231798  138551 retry.go:31] will retry after 614.68686ms: waiting for machine to come up
	I1028 13:18:03.847957  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:03.848566  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:03.848605  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:03.848518  138551 retry.go:31] will retry after 970.343933ms: waiting for machine to come up
	I1028 13:18:04.821106  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:04.821691  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:04.821724  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:04.821656  138551 retry.go:31] will retry after 1.025520884s: waiting for machine to come up
	I1028 13:18:05.848662  138441 main.go:141] libmachine: (kindnet-297280) DBG | domain kindnet-297280 has defined MAC address 52:54:00:79:ab:f4 in network mk-kindnet-297280
	I1028 13:18:05.849159  138441 main.go:141] libmachine: (kindnet-297280) DBG | unable to find current IP address of domain kindnet-297280 in network mk-kindnet-297280
	I1028 13:18:05.849188  138441 main.go:141] libmachine: (kindnet-297280) DBG | I1028 13:18:05.849118  138551 retry.go:31] will retry after 1.206024552s: waiting for machine to come up
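
While auto-297280 provisions, the kindnet-297280 machine is polled for an IP: each attempt looks for a DHCP lease matching the domain's MAC and, on failure, retries after a growing delay (the "will retry after ..." lines from retry.go:31). A small stand-alone sketch of that retry-with-backoff pattern; lookupIP and waitForIP are stand-ins, not the driver's real lease lookup:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a stand-in for querying libvirt's DHCP leases for the domain's MAC.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP retries lookupIP with a growing, jittered delay, mirroring the
    // "will retry after ..." lines in the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay += delay / 2 // grow the base delay
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	if _, err := waitForIP("52:54:00:79:ab:f4", 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
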
	I1028 13:18:04.412305  137683 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.265562641s)
	I1028 13:18:04.412352  137683 crio.go:469] duration metric: took 2.265732824s to extract the tarball
	I1028 13:18:04.412364  137683 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 13:18:04.450862  137683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:18:04.494907  137683 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 13:18:04.494939  137683 cache_images.go:84] Images are preloaded, skipping loading
	I1028 13:18:04.494947  137683 kubeadm.go:934] updating node { 192.168.39.218 8443 v1.31.2 crio true true} ...
	I1028 13:18:04.495044  137683 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-297280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:auto-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
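
The block above is the kubelet systemd drop-in minikube renders from the node config; the empty ExecStart= line clears the command inherited from the base kubelet.service before the new one is set. A hedged text/template sketch that renders a similar drop-in; the nodeConfig type and field names are illustrative, not minikube's:

    package main

    import (
    	"os"
    	"text/template"
    )

    // nodeConfig holds only the fields the drop-in needs; illustrative, not minikube's type.
    type nodeConfig struct {
    	KubeletPath string
    	Hostname    string
    	NodeIP      string
    }

    // The empty "ExecStart=" clears any ExecStart from the base kubelet.service
    // before the new command line is set, as in the log above.
    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	_ = t.Execute(os.Stdout, nodeConfig{
    		KubeletPath: "/var/lib/minikube/binaries/v1.31.2/kubelet",
    		Hostname:    "auto-297280",
    		NodeIP:      "192.168.39.218",
    	})
    }
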
	I1028 13:18:04.495110  137683 ssh_runner.go:195] Run: crio config
	I1028 13:18:04.539755  137683 cni.go:84] Creating CNI manager for ""
	I1028 13:18:04.539784  137683 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:18:04.539797  137683 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 13:18:04.539829  137683 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.218 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-297280 NodeName:auto-297280 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 13:18:04.539996  137683 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-297280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
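
The kubeadm config above wires podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12; the two ranges must not overlap for routing to work. A small stand-alone check of that constraint with the standard library (an illustration only, not a step minikube runs here):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	pods := netip.MustParsePrefix("10.244.0.0/16")    // podSubnet from the config above
    	services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet from the config above

    	if pods.Overlaps(services) {
    		fmt.Println("pod and service CIDRs overlap: pick disjoint ranges")
    		return
    	}
    	fmt.Println("pod and service CIDRs are disjoint")
    }
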
	I1028 13:18:04.540066  137683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 13:18:04.550538  137683 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 13:18:04.550641  137683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 13:18:04.559290  137683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1028 13:18:04.575490  137683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 13:18:04.590970  137683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I1028 13:18:04.606559  137683 ssh_runner.go:195] Run: grep 192.168.39.218	control-plane.minikube.internal$ /etc/hosts
	I1028 13:18:04.610440  137683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:18:04.621800  137683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:18:04.742436  137683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:18:04.759429  137683 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280 for IP: 192.168.39.218
	I1028 13:18:04.759460  137683 certs.go:194] generating shared ca certs ...
	I1028 13:18:04.759483  137683 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:18:04.759725  137683 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 13:18:04.759808  137683 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 13:18:04.759827  137683 certs.go:256] generating profile certs ...
	I1028 13:18:04.759912  137683 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.key
	I1028 13:18:04.759931  137683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt with IP's: []
	I1028 13:18:05.082458  137683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt ...
	I1028 13:18:05.082490  137683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: {Name:mk5eb9c5591a68529902371d13cd697cde0c7589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:18:05.082689  137683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.key ...
	I1028 13:18:05.082706  137683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.key: {Name:mkf140e2a80001458a601f725d3b45ec439cae2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:18:05.082815  137683 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.key.f3b8933b
	I1028 13:18:05.082831  137683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.crt.f3b8933b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.218]
	I1028 13:18:05.204781  137683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.crt.f3b8933b ...
	I1028 13:18:05.204811  137683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.crt.f3b8933b: {Name:mk9c484fae9aa634984ffe0ecc19a18b832cb37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:18:05.205010  137683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.key.f3b8933b ...
	I1028 13:18:05.205029  137683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.key.f3b8933b: {Name:mk84eec9e2b42dacd04e6dcff7ddc4a4c55279b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:18:05.205135  137683 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.crt.f3b8933b -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.crt
	I1028 13:18:05.205228  137683 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.key.f3b8933b -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.key
	I1028 13:18:05.205283  137683 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.key
	I1028 13:18:05.205298  137683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.crt with IP's: []
	I1028 13:18:05.462991  137683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.crt ...
	I1028 13:18:05.463029  137683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.crt: {Name:mk647fd1172d54877a0ad61a091cb584bf6a9a7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:18:05.463241  137683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.key ...
	I1028 13:18:05.463256  137683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.key: {Name:mk1026af378d4932bb767f03bdce4c0b1e9ae1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
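
The profile certificates generated above are signed by the cached minikubeCA and, for the apiserver cert, carry the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP. A minimal crypto/x509 sketch of issuing a cert with IP SANs; here the CA is self-signed in place as a stand-in, whereas the log shows minikube reusing an existing CA from disk:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed CA (stand-in for minikubeCA, which the log shows is reused from disk).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Leaf cert with the same IP SANs the log shows for the apiserver profile cert.
    	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.218"),
    		},
    	}
    	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("issued %d-byte apiserver cert with IP SANs", len(leafDER))
    }
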
	I1028 13:18:05.463482  137683 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 13:18:05.463525  137683 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 13:18:05.463540  137683 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 13:18:05.463586  137683 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 13:18:05.463618  137683 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 13:18:05.463676  137683 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 13:18:05.463731  137683 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:18:05.464389  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 13:18:05.490824  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 13:18:05.513309  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 13:18:05.537644  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 13:18:05.559710  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1028 13:18:05.586383  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 13:18:05.608029  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 13:18:05.628501  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 13:18:05.649820  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 13:18:05.671761  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 13:18:05.695620  137683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 13:18:05.717545  137683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 13:18:05.732328  137683 ssh_runner.go:195] Run: openssl version
	I1028 13:18:05.739475  137683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 13:18:05.750928  137683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:18:05.755496  137683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:18:05.755570  137683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:18:05.761753  137683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 13:18:05.773335  137683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 13:18:05.783682  137683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 13:18:05.788180  137683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 13:18:05.788236  137683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 13:18:05.793531  137683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 13:18:05.803483  137683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 13:18:05.813148  137683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 13:18:05.817173  137683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 13:18:05.817222  137683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 13:18:05.824384  137683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
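
Each CA bundle copied to /usr/share/ca-certificates is made visible to OpenSSL by symlinking it into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is what the paired "openssl x509 -hash -noout" and "ln -fs" commands do. A stand-alone sketch of the same two steps; installCA is a hypothetical helper and the path is an example, with the real flow running these commands over SSH:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL subject-hash name,
    // mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
    // (Illustrative only; it needs privileges to write /etc/ssl/certs.)
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }
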
	I1028 13:18:05.838265  137683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 13:18:05.843169  137683 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 13:18:05.843237  137683 kubeadm.go:392] StartCluster: {Name:auto-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clu
sterName:auto-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:18:05.843337  137683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 13:18:05.843389  137683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 13:18:05.880705  137683 cri.go:89] found id: ""
	I1028 13:18:05.880783  137683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 13:18:05.891443  137683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 13:18:05.901559  137683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 13:18:05.911436  137683 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 13:18:05.911455  137683 kubeadm.go:157] found existing configuration files:
	
	I1028 13:18:05.911504  137683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 13:18:05.920342  137683 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 13:18:05.920396  137683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 13:18:05.930622  137683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 13:18:05.939619  137683 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 13:18:05.939692  137683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 13:18:05.948931  137683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 13:18:05.957568  137683 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 13:18:05.957628  137683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 13:18:05.967800  137683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 13:18:05.977431  137683 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 13:18:05.977513  137683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
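
Before running kubeadm init, each existing /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and removed if it does not reference it; on this fresh node none of the files exist, so every grep exits with status 2 and the rm calls are no-ops. A rough stand-alone sketch of that cleanup loop run locally (in the real flow the commands go over SSH and under sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits non-zero when the endpoint (or the file) is missing,
    		// in which case the stale file is removed, as in the log above.
    		if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
    			_ = os.Remove(conf)
    		}
    	}
    }
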
	I1028 13:18:05.987453  137683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 13:18:06.149052  137683 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.057634101Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9a09253ca759b3f26e9776e152c5df3ac5cf2504bf8639304259f01a49d73452,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-gch8d,Uid:55392b3f-3144-428f-b8aa-d0a45b9b8116,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120549862405256,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-gch8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55392b3f-3144-428f-b8aa-d0a45b9b8116,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:02:29.541444754Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&PodSandboxMetadata{Name:kube-proxy-fnp29,Uid:dbb76c8a-2b11-4081-af16-f10a021c45ef,Name
space:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120549452197950,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:02:27.641369625Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4be30d8c-606c-40ed-bef9-1cbb5742b98d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120549427153602,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-
606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T13:02:29.113313156Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qcqc4,Ui
d:31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120548864736234,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:02:28.258017960Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhnvt,Uid:3d624ceb-527a-4a10-9ec9-ded3928c6ba8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120548841717012,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6ba8,k8s-app: kube-dns,pod-templa
te-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:02:28.234302488Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-818470,Uid:32b6e9db89ff9ee68816f2fc25ba251c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120537950745331,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 32b6e9db89ff9ee68816f2fc25ba251c,kubernetes.io/config.seen: 2024-10-28T13:02:17.498161230Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-818470,Uid:f2c079761315b4bc666e1cabcd79204c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120537949623464,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2c079761315b4bc666e1cabcd79204c,kubernetes.io/config.seen: 2024-10-28T13:02:17.498159999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-818470,Uid:dd5482f6e3aee8942026c010be39b794,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1730120537948823511,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.164:8443,kubernetes.io/config.hash: dd5482f6e3aee8942026c010be39b794,kubernetes.io/config.seen: 2024-10-28T13:02:17.498158731Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-818470,Uid:1c85b559c3ce03721f949a025c7449ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730120537924687486,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.164:2379,kubernetes.io/config.hash: 1c85b559c3ce03721f949a025c7449ef,kubernetes.io/config.seen: 2024-10-28T13:02:17.498154384Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-818470,Uid:dd5482f6e3aee8942026c010be39b794,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1730120248722885698,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.164:8443,kubernetes.io/config.hash: dd5482f6e3aee8942026c010be39b794,kubernetes.io/config.seen: 2024-10-28T12:57:28.244522932Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=3e93676e-49e2-469e-9025-502854ee1ade name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.058702031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c877671-fe33-43e8-8965-7ebf98441bcd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.058805407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c877671-fe33-43e8-8965-7ebf98441bcd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.059328675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c877671-fe33-43e8-8965-7ebf98441bcd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.070034548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9674c654-3f14-4833-8541-535d9f842ab0 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.070105789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9674c654-3f14-4833-8541-535d9f842ab0 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.071103441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e84d4d4-b64b-40de-a4a2-ea8702362c5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.071624431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121488071604466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e84d4d4-b64b-40de-a4a2-ea8702362c5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.072605141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d822f60-7088-4001-85f7-f0ec9e40f9f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.072673591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d822f60-7088-4001-85f7-f0ec9e40f9f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.072947357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d822f60-7088-4001-85f7-f0ec9e40f9f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.114878892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85457295-5fb7-4a7b-b6c4-c54b44a8510e name=/runtime.v1.RuntimeService/Version
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.114961698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85457295-5fb7-4a7b-b6c4-c54b44a8510e name=/runtime.v1.RuntimeService/Version
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.116880353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26b1ac3b-5946-4ee5-8bca-4d786c1e1429 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.117486306Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121488117457854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26b1ac3b-5946-4ee5-8bca-4d786c1e1429 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.118208395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b373c5c9-77f9-4477-b11d-93aa2eff5d96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.118296000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b373c5c9-77f9-4477-b11d-93aa2eff5d96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.118717898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b373c5c9-77f9-4477-b11d-93aa2eff5d96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.159813311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07431899-7096-4bf1-a4a1-128f70235098 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.159909381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07431899-7096-4bf1-a4a1-128f70235098 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.160856346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6ee0dd2-6967-47a4-9041-435a812eae50 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.161323488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121488161295149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6ee0dd2-6967-47a4-9041-435a812eae50 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.161939674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=922f89db-64eb-4e45-9319-044e9af9c303 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.162073191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=922f89db-64eb-4e45-9319-044e9af9c303 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:18:08 embed-certs-818470 crio[709]: time="2024-10-28 13:18:08.162296741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7,PodSandboxId:eafd327cc40dd4e3316627a3d3949f174f7335d65f15ef6efafa264eaeb14bd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730120549812295911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fnp29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb76c8a-2b11-4081-af16-f10a021c45ef,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677,PodSandboxId:fa512650272eabeb3f10ca1d7ce26abeb2586da295db40ba7ee6df8b78ca6069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730120549667444740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4be30d8c-606c-40ed-bef9-1cbb5742b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb,PodSandboxId:9f9b8bb378fa0985c717a5b5f11aa3856022bddf0dfeafb2d7f6f5d1da9ca398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549384691819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qcqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c781e9-9c9d-4ec5-9f36-53eba2bc05d0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065,PodSandboxId:ce1a1a104c0814e8434e1d24efd0bad0ddf3f8e9638ded9df842b8d24e8eca62,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730120549343341112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhnvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d624ceb-527a-4a10-9ec9-ded3928c6b
a8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78,PodSandboxId:f58535f3482350168b83c56aefa76f093477cb6abba709876469af3f3a69553c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730120538177331455
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c079761315b4bc666e1cabcd79204c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40,PodSandboxId:86984d33d56b9c239ef50057db77624408dda63985327fa725b2ade354589585,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730120538143
456483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7,PodSandboxId:24656799c6033ee518e3bf838bdb5263613eb9d077e445afb78312f0e1cfe9de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730120538151343590,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32b6e9db89ff9ee68816f2fc25ba251c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139,PodSandboxId:960b61c8cb8943d9183d6ed499f07d668c6a1c92cafeffba4ad2e2fd8b1247a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730120538065606191,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c85b559c3ce03721f949a025c7449ef,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41,PodSandboxId:ea12c1dd2e35c44a9e485f28d23788118fda5e9ff4ea7dcbb4998701ceb4aa98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730120250095394795,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-818470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5482f6e3aee8942026c010be39b794,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=922f89db-64eb-4e45-9319-044e9af9c303 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	216d910684a64       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 minutes ago      Running             kube-proxy                0                   eafd327cc40dd       kube-proxy-fnp29
	51f00087da19f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   fa512650272ea       storage-provisioner
	c946b22272329       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   9f9b8bb378fa0       coredns-7c65d6cfc9-qcqc4
	ccb55f7524b1b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   ce1a1a104c081       coredns-7c65d6cfc9-dhnvt
	fd43e7c634614       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   15 minutes ago      Running             kube-controller-manager   2                   f58535f348235       kube-controller-manager-embed-certs-818470
	a013983c01bbd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   15 minutes ago      Running             kube-scheduler            2                   24656799c6033       kube-scheduler-embed-certs-818470
	ae1e4e7f8dc7e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   15 minutes ago      Running             kube-apiserver            2                   86984d33d56b9       kube-apiserver-embed-certs-818470
	1d3bb04bab09a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   960b61c8cb894       etcd-embed-certs-818470
	bf46bfafb0773       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   20 minutes ago      Exited              kube-apiserver            1                   ea12c1dd2e35c       kube-apiserver-embed-certs-818470
	
	
	==> coredns [c946b22272329ebfff89a97c58d7f03b160821ba39d5b6618e53b62d4d5b41fb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ccb55f7524b1b61e3e2d79cd13d5bdedf06cdf8bb4730d0b9e88593907359065] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-818470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-818470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=embed-certs-818470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T13_02_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 13:02:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-818470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 13:18:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 13:17:51 +0000   Mon, 28 Oct 2024 13:02:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 13:17:51 +0000   Mon, 28 Oct 2024 13:02:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 13:17:51 +0000   Mon, 28 Oct 2024 13:02:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 13:17:51 +0000   Mon, 28 Oct 2024 13:02:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.164
	  Hostname:    embed-certs-818470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb71a22bbf964c239bcc801ef66a0686
	  System UUID:                fb71a22b-bf96-4c23-9bcc-801ef66a0686
	  Boot ID:                    05767ac6-cbb1-40dd-a742-a92355748028
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dhnvt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-qcqc4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-818470                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-818470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-818470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fnp29                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-818470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-gch8d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-818470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-818470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-818470 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-818470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-818470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-818470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-818470 event: Registered Node embed-certs-818470 in Controller
	
	
	==> dmesg <==
	[  +0.063255] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041933] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.159918] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.913825] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.564294] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.986530] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.057750] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059955] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.180714] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.141216] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.275098] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +3.759523] systemd-fstab-generator[790]: Ignoring "noauto" option for root device
	[  +1.881414] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.061451] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.506624] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.163782] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 13:02] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +0.058103] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.008876] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.967478] systemd-fstab-generator[2902]: Ignoring "noauto" option for root device
	[  +5.858477] systemd-fstab-generator[3033]: Ignoring "noauto" option for root device
	[  +0.096256] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.908851] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1d3bb04bab09a10ccaa5a1981d37e7586a08c60aa09e283e73a18f5651253139] <==
	{"level":"info","ts":"2024-10-28T13:13:32.616685Z","caller":"traceutil/trace.go:171","msg":"trace[459874839] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1027; }","duration":"183.729846ms","start":"2024-10-28T13:13:32.432947Z","end":"2024-10-28T13:13:32.616677Z","steps":["trace[459874839] 'agreement among raft nodes before linearized reading'  (duration: 183.662502ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:13:32.718699Z","caller":"traceutil/trace.go:171","msg":"trace[1749602257] linearizableReadLoop","detail":"{readStateIndex:1177; appliedIndex:1176; }","duration":"100.735508ms","start":"2024-10-28T13:13:32.617949Z","end":"2024-10-28T13:13:32.718685Z","steps":["trace[1749602257] 'read index received'  (duration: 98.996476ms)","trace[1749602257] 'applied index is now lower than readState.Index'  (duration: 1.738639ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:13:32.718844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.872393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:13:32.718891Z","caller":"traceutil/trace.go:171","msg":"trace[1832487936] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1028; }","duration":"100.937663ms","start":"2024-10-28T13:13:32.617946Z","end":"2024-10-28T13:13:32.718884Z","steps":["trace[1832487936] 'agreement among raft nodes before linearized reading'  (duration: 100.816699ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:13:32.719011Z","caller":"traceutil/trace.go:171","msg":"trace[714854085] transaction","detail":"{read_only:false; response_revision:1028; number_of_response:1; }","duration":"101.798023ms","start":"2024-10-28T13:13:32.617159Z","end":"2024-10-28T13:13:32.718957Z","steps":["trace[714854085] 'process raft request'  (duration: 99.867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:16:12.410311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.918568ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14310067776874148505 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.164\" mod_revision:1148 > success:<request_put:<key:\"/registry/masterleases/192.168.50.164\" value_size:67 lease:5086695740019372695 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.164\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T13:16:12.410483Z","caller":"traceutil/trace.go:171","msg":"trace[1494908738] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"191.134764ms","start":"2024-10-28T13:16:12.219322Z","end":"2024-10-28T13:16:12.410457Z","steps":["trace[1494908738] 'process raft request'  (duration: 62.871304ms)","trace[1494908738] 'compare'  (duration: 127.804544ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T13:16:31.070600Z","caller":"traceutil/trace.go:171","msg":"trace[1586310409] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"343.140757ms","start":"2024-10-28T13:16:30.727443Z","end":"2024-10-28T13:16:31.070584Z","steps":["trace[1586310409] 'process raft request'  (duration: 342.991185ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:16:31.070809Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T13:16:30.727425Z","time spent":"343.319787ms","remote":"127.0.0.1:49698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1169 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-28T13:16:31.294382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.120613ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:16:31.294526Z","caller":"traceutil/trace.go:171","msg":"trace[559666040] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1171; }","duration":"161.291764ms","start":"2024-10-28T13:16:31.133219Z","end":"2024-10-28T13:16:31.294511Z","steps":["trace[559666040] 'range keys from in-memory index tree'  (duration: 161.105729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:16:31.294381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.096321ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T13:16:31.294647Z","caller":"traceutil/trace.go:171","msg":"trace[677627206] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1171; }","duration":"104.362387ms","start":"2024-10-28T13:16:31.190271Z","end":"2024-10-28T13:16:31.294634Z","steps":["trace[677627206] 'count revisions from in-memory index tree'  (duration: 104.044894ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:17:19.022862Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-10-28T13:17:19.027416Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"3.882147ms","hash":1880067880,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1662976,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-10-28T13:17:19.027503Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1880067880,"revision":966,"compact-revision":723}
	{"level":"info","ts":"2024-10-28T13:17:25.593886Z","caller":"traceutil/trace.go:171","msg":"trace[353009841] linearizableReadLoop","detail":"{readStateIndex:1411; appliedIndex:1410; }","duration":"171.587401ms","start":"2024-10-28T13:17:25.422275Z","end":"2024-10-28T13:17:25.593863Z","steps":["trace[353009841] 'read index received'  (duration: 171.423891ms)","trace[353009841] 'applied index is now lower than readState.Index'  (duration: 162.826µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:17:25.594187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.882929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:17:25.594674Z","caller":"traceutil/trace.go:171","msg":"trace[1966053834] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1215; }","duration":"172.390687ms","start":"2024-10-28T13:17:25.422270Z","end":"2024-10-28T13:17:25.594661Z","steps":["trace[1966053834] 'agreement among raft nodes before linearized reading'  (duration: 171.853902ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:17:25.594237Z","caller":"traceutil/trace.go:171","msg":"trace[1231804886] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"267.92935ms","start":"2024-10-28T13:17:25.326294Z","end":"2024-10-28T13:17:25.594223Z","steps":["trace[1231804886] 'process raft request'  (duration: 267.453792ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:17:25.851927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.6975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T13:17:25.852097Z","caller":"traceutil/trace.go:171","msg":"trace[570755455] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:1215; }","duration":"147.878861ms","start":"2024-10-28T13:17:25.704201Z","end":"2024-10-28T13:17:25.852080Z","steps":["trace[570755455] 'count revisions from in-memory index tree'  (duration: 147.638013ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:18:05.907623Z","caller":"traceutil/trace.go:171","msg":"trace[1455198952] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"107.971701ms","start":"2024-10-28T13:18:05.799615Z","end":"2024-10-28T13:18:05.907587Z","steps":["trace[1455198952] 'process raft request'  (duration: 107.825532ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:18:06.164277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.399744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T13:18:06.165029Z","caller":"traceutil/trace.go:171","msg":"trace[1804634891] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1249; }","duration":"138.126003ms","start":"2024-10-28T13:18:06.026812Z","end":"2024-10-28T13:18:06.164938Z","steps":["trace[1804634891] 'count revisions from in-memory index tree'  (duration: 137.291191ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:18:08 up 21 min,  0 users,  load average: 0.15, 0.12, 0.09
	Linux embed-certs-818470 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae1e4e7f8dc7e76982e2f6edffc8cccf17ad54c3a1f913c181fb1628f669cd40] <==
	I1028 13:13:21.371859       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:13:21.371903       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:15:21.373016       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 13:15:21.373050       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:15:21.373288       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1028 13:15:21.373339       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 13:15:21.374530       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:15:21.374554       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:17:20.372633       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:17:20.373175       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:17:21.375122       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 13:17:21.375196       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:17:21.375374       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 13:17:21.375422       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:17:21.376584       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:17:21.376640       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bf46bfafb0773bf96b2dfd3a4bacbd08ce4b0de414738bf7c4b8fcb484aa6a41] <==
	W1028 13:02:10.420022       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.458107       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.461596       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.485440       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.488936       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.506155       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.613252       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.665321       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.694307       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.713176       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.753044       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.769128       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.861761       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:10.882857       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.121152       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.132643       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.164569       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.234519       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:11.383470       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:12.753571       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:14.699502       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:14.938381       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:14.965617       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:15.030895       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 13:02:15.154411       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [fd43e7c634614eaea00a1406507d6ec94ba3c171f07060e522608afec0df6b78] <==
	E1028 13:12:57.455778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:12:57.909895       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:13:27.463260       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:13:27.918299       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:13:32.350707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="936.787µs"
	I1028 13:13:46.775531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="59.98µs"
	E1028 13:13:57.469665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:13:57.926274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:14:27.476727       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:14:27.934302       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:14:57.483841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:14:57.942005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:15:27.490375       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:15:27.949457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:15:57.496778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:15:57.958240       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:16:27.503826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:16:27.966571       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:16:57.509954       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:16:57.974350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:17:27.518172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:17:27.982721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:17:51.853954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-818470"
	E1028 13:17:57.524831       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:17:57.991481       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [216d910684a64fa244dc16757c05ed4d3a28b9dfdf00096ad072b9e6c3c7e5b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 13:02:30.154068       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 13:02:30.162398       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.164"]
	E1028 13:02:30.162570       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 13:02:30.191092       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 13:02:30.191127       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 13:02:30.191156       1 server_linux.go:169] "Using iptables Proxier"
	I1028 13:02:30.193299       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 13:02:30.193783       1 server.go:483] "Version info" version="v1.31.2"
	I1028 13:02:30.193828       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 13:02:30.195176       1 config.go:199] "Starting service config controller"
	I1028 13:02:30.195228       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 13:02:30.195268       1 config.go:105] "Starting endpoint slice config controller"
	I1028 13:02:30.195294       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 13:02:30.195737       1 config.go:328] "Starting node config controller"
	I1028 13:02:30.197463       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 13:02:30.296405       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 13:02:30.296455       1 shared_informer.go:320] Caches are synced for service config
	I1028 13:02:30.297831       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a013983c01bbd99d5d9b29f995696d58d8be0e044d98783e1cd89829392de0c7] <==
	W1028 13:02:20.394637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 13:02:20.394735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.220079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 13:02:21.220189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.227673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.227757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.241094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 13:02:21.241134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.298824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 13:02:21.298877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.350364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.350408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.413320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 13:02:21.413368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.415045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 13:02:21.415083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.509575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 13:02:21.509694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.541518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.541569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 13:02:21.541691       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 13:02:21.541718       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 13:02:21.603096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 13:02:21.603166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 13:02:23.586477       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 13:17:00 embed-certs-818470 kubelet[2909]: E1028 13:17:00.760577    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:17:02 embed-certs-818470 kubelet[2909]: E1028 13:17:02.996036    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121422995760862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:02 embed-certs-818470 kubelet[2909]: E1028 13:17:02.996076    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121422995760862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:12 embed-certs-818470 kubelet[2909]: E1028 13:17:12.998765    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121432998377136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:12 embed-certs-818470 kubelet[2909]: E1028 13:17:12.998814    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121432998377136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:14 embed-certs-818470 kubelet[2909]: E1028 13:17:14.761278    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:17:22 embed-certs-818470 kubelet[2909]: E1028 13:17:22.777961    2909 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 13:17:22 embed-certs-818470 kubelet[2909]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 13:17:22 embed-certs-818470 kubelet[2909]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 13:17:22 embed-certs-818470 kubelet[2909]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 13:17:22 embed-certs-818470 kubelet[2909]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 13:17:23 embed-certs-818470 kubelet[2909]: E1028 13:17:23.000667    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121443000144370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:23 embed-certs-818470 kubelet[2909]: E1028 13:17:23.000938    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121443000144370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:27 embed-certs-818470 kubelet[2909]: E1028 13:17:27.760508    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:17:33 embed-certs-818470 kubelet[2909]: E1028 13:17:33.003666    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121453003213091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:33 embed-certs-818470 kubelet[2909]: E1028 13:17:33.004129    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121453003213091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:41 embed-certs-818470 kubelet[2909]: E1028 13:17:41.760910    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:17:43 embed-certs-818470 kubelet[2909]: E1028 13:17:43.006564    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121463005653353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:43 embed-certs-818470 kubelet[2909]: E1028 13:17:43.007055    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121463005653353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:53 embed-certs-818470 kubelet[2909]: E1028 13:17:53.008367    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121473008076801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:53 embed-certs-818470 kubelet[2909]: E1028 13:17:53.008635    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121473008076801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:17:55 embed-certs-818470 kubelet[2909]: E1028 13:17:55.761213    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	Oct 28 13:18:03 embed-certs-818470 kubelet[2909]: E1028 13:18:03.011621    2909 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121483011049886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:18:03 embed-certs-818470 kubelet[2909]: E1028 13:18:03.012061    2909 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121483011049886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:18:07 embed-certs-818470 kubelet[2909]: E1028 13:18:07.760307    2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gch8d" podUID="55392b3f-3144-428f-b8aa-d0a45b9b8116"
	
	
	==> storage-provisioner [51f00087da19f2833eca7a813bf9962443be8d34e686ea9ff42607e6a4800677] <==
	I1028 13:02:29.806264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 13:02:29.837690       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 13:02:29.837741       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 13:02:29.865797       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 13:02:29.865964       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-818470_cf509ba4-379a-473a-822b-0391becb58d3!
	I1028 13:02:29.866057       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1388a12-67ad-42ed-908d-5ed5e6961363", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-818470_cf509ba4-379a-473a-822b-0391becb58d3 became leader
	I1028 13:02:29.972174       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-818470_cf509ba4-379a-473a-822b-0391becb58d3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-818470 -n embed-certs-818470
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-818470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gch8d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-818470 describe pod metrics-server-6867b74b74-gch8d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-818470 describe pod metrics-server-6867b74b74-gch8d: exit status 1 (71.703378ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gch8d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-818470 describe pod metrics-server-6867b74b74-gch8d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (386.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (107.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
E1028 13:14:20.375837   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.208:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (222.580201ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-733464" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-733464 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-733464 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.235µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-733464 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (217.436788ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-733464 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-717454                              | cert-expiration-717454       | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:48 UTC |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:48 UTC | 28 Oct 24 12:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-818470            | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-702694             | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC | 28 Oct 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC | 28 Oct 24 12:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-733464        | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 12:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-818470                 | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-818470                                  | embed-certs-818470           | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC | 28 Oct 24 13:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-702694                  | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-702694                                   | no-preload-702694            | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 13:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-733464             | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC | 28 Oct 24 12:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-733464                              | old-k8s-version-733464       | jenkins | v1.34.0 | 28 Oct 24 12:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-868919                           | kubernetes-upgrade-868919    | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-213407 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:04 UTC |
	|         | disable-driver-mounts-213407                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:04 UTC | 28 Oct 24 13:05 UTC |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-783661  | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC | 28 Oct 24 13:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:05 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783661       | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-783661 | jenkins | v1.34.0 | 28 Oct 24 13:08 UTC |                     |
	|         | default-k8s-diff-port-783661                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:08:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:08:22.743907  134197 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:08:22.744028  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744040  134197 out.go:358] Setting ErrFile to fd 2...
	I1028 13:08:22.744047  134197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:08:22.744230  134197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:08:22.744750  134197 out.go:352] Setting JSON to false
	I1028 13:08:22.745654  134197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10253,"bootTime":1730110650,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:08:22.745744  134197 start.go:139] virtualization: kvm guest
	I1028 13:08:22.747939  134197 out.go:177] * [default-k8s-diff-port-783661] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:08:22.749403  134197 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:08:22.749457  134197 notify.go:220] Checking for updates...
	I1028 13:08:22.751796  134197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:08:22.753005  134197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:08:22.754141  134197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:08:22.755335  134197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:08:22.756546  134197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:08:22.758122  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:08:22.758528  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.758586  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.773341  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I1028 13:08:22.773804  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.774488  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.774519  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.774851  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.775031  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.775267  134197 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:08:22.775558  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.775601  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.789667  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I1028 13:08:22.790111  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.790632  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.790659  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.791008  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.791222  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.825579  134197 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 13:08:22.826616  134197 start.go:297] selected driver: kvm2
	I1028 13:08:22.826631  134197 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.826749  134197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:08:22.827454  134197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.827533  134197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:08:22.841833  134197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:08:22.842206  134197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:08:22.842238  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:08:22.842287  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:08:22.842319  134197 start.go:340] cluster config:
	{Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:08:22.842425  134197 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:08:22.844980  134197 out.go:177] * Starting "default-k8s-diff-port-783661" primary control-plane node in "default-k8s-diff-port-783661" cluster
	I1028 13:08:22.846171  134197 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:08:22.846203  134197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:08:22.846210  134197 cache.go:56] Caching tarball of preloaded images
	I1028 13:08:22.846302  134197 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:08:22.846315  134197 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:08:22.846407  134197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:08:22.846587  134197 start.go:360] acquireMachinesLock for default-k8s-diff-port-783661: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:08:22.846633  134197 start.go:364] duration metric: took 26.842µs to acquireMachinesLock for "default-k8s-diff-port-783661"
	I1028 13:08:22.846652  134197 start.go:96] Skipping create...Using existing machine configuration
	I1028 13:08:22.846661  134197 fix.go:54] fixHost starting: 
	I1028 13:08:22.846932  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:08:22.846968  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:08:22.860395  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I1028 13:08:22.860752  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:08:22.861207  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:08:22.861239  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:08:22.861578  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:08:22.861740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.861874  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:08:22.863378  134197 fix.go:112] recreateIfNeeded on default-k8s-diff-port-783661: state=Running err=<nil>
	W1028 13:08:22.863410  134197 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 13:08:22.865166  134197 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-783661" VM ...
	I1028 13:08:22.866336  134197 machine.go:93] provisionDockerMachine start ...
	I1028 13:08:22.866355  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:08:22.866529  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:08:22.869364  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.869837  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:05:00 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:08:22.869861  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:08:22.870068  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:08:22.870245  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870416  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:08:22.870528  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:08:22.870703  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:08:22.870930  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:08:22.870946  134197 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 13:08:25.759930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:28.831940  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:34.911959  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:37.983844  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:44.063898  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:47.135931  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:56.256018  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:08:59.327922  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:05.407915  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:08.479971  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:14.559886  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:17.635930  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:23.711861  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:26.783972  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:32.863862  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:35.935864  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:42.015884  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:45.091903  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:51.167873  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:09:54.239919  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:00.319846  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:03.391949  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:09.471853  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:12.543958  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:18.623893  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:21.695970  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:27.775910  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:30.851880  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:36.927896  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:39.999969  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:46.079860  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:49.151950  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:55.231873  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:10:58.304033  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:04.383879  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:07.455895  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:13.535868  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:16.607992  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:22.691863  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:25.759911  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:31.839918  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:34.915917  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:40.991816  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:44.063821  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:50.143851  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:53.215876  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:11:59.295883  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:02.367891  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:08.447861  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:11.519919  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:17.599962  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:20.671890  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:26.751894  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:29.823995  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:35.903877  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:38.975878  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:45.055820  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:48.127923  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:54.207852  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:12:57.279901  134197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.58:22: connect: no route to host
	I1028 13:13:00.282367  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 13:13:00.282410  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:00.282710  134197 buildroot.go:166] provisioning hostname "default-k8s-diff-port-783661"
	I1028 13:13:00.282740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:00.282912  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:00.284376  134197 machine.go:96] duration metric: took 4m37.418023894s to provisionDockerMachine
	I1028 13:13:00.284414  134197 fix.go:56] duration metric: took 4m37.437752982s for fixHost
	I1028 13:13:00.284426  134197 start.go:83] releasing machines lock for "default-k8s-diff-port-783661", held for 4m37.437782013s
	W1028 13:13:00.284446  134197 start.go:714] error starting host: provision: host is not running
	W1028 13:13:00.284577  134197 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 13:13:00.284588  134197 start.go:729] Will try again in 5 seconds ...
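The block above shows libmachine repeatedly failing to dial 192.168.61.58:22 while the VM is stopped, then giving up on the first fixHost attempt and scheduling a retry. As a rough illustration of that dial-until-deadline pattern, here is a minimal Go sketch (hypothetical helper, not minikube's actual implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithDeadline keeps attempting a TCP connection to addr until the
// overall deadline passes, sleeping briefly between attempts. It returns
// the last dial error if the host never becomes reachable.
func dialWithDeadline(addr string, overall, perAttempt, pause time.Duration) (net.Conn, error) {
	deadline := time.Now().Add(overall)
	var lastErr error
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, perAttempt)
		if err == nil {
			return conn, nil
		}
		lastErr = err // e.g. "connect: no route to host" while the VM is down
		time.Sleep(pause)
	}
	return nil, fmt.Errorf("host not reachable within %s: %w", overall, lastErr)
}

func main() {
	conn, err := dialWithDeadline("192.168.61.58:22", 30*time.Second, 3*time.Second, 3*time.Second)
	if err != nil {
		fmt.Println("giving up, will restart the VM and try again:", err)
		return
	}
	conn.Close()
}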
	I1028 13:13:05.286973  134197 start.go:360] acquireMachinesLock for default-k8s-diff-port-783661: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:13:05.287087  134197 start.go:364] duration metric: took 72.329µs to acquireMachinesLock for "default-k8s-diff-port-783661"
	I1028 13:13:05.287116  134197 start.go:96] Skipping create...Using existing machine configuration
	I1028 13:13:05.287124  134197 fix.go:54] fixHost starting: 
	I1028 13:13:05.287464  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:05.287491  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:05.302541  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I1028 13:13:05.303110  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:05.303659  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:05.303684  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:05.304035  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:05.304229  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:05.304406  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:05.305973  134197 fix.go:112] recreateIfNeeded on default-k8s-diff-port-783661: state=Stopped err=<nil>
	I1028 13:13:05.305996  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	W1028 13:13:05.306168  134197 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 13:13:05.308037  134197 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-783661" ...
	I1028 13:13:05.309346  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Start
	I1028 13:13:05.309513  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring networks are active...
	I1028 13:13:05.310213  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring network default is active
	I1028 13:13:05.310554  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Ensuring network mk-default-k8s-diff-port-783661 is active
	I1028 13:13:05.311086  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Getting domain xml...
	I1028 13:13:05.311852  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Creating domain...
	I1028 13:13:06.540494  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting to get IP...
	I1028 13:13:06.541481  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.541978  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.542062  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:06.541938  135448 retry.go:31] will retry after 231.647331ms: waiting for machine to come up
	I1028 13:13:06.775409  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.775987  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:06.776017  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:06.775942  135448 retry.go:31] will retry after 239.756878ms: waiting for machine to come up
	I1028 13:13:07.017477  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.018004  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.018032  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:07.017953  135448 retry.go:31] will retry after 422.324589ms: waiting for machine to come up
	I1028 13:13:07.441468  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.441999  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:07.442037  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:07.441939  135448 retry.go:31] will retry after 578.443419ms: waiting for machine to come up
	I1028 13:13:08.021645  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.022146  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.022178  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:08.022086  135448 retry.go:31] will retry after 647.039207ms: waiting for machine to come up
	I1028 13:13:08.670333  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.670868  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:08.670892  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:08.670811  135448 retry.go:31] will retry after 714.058494ms: waiting for machine to come up
	I1028 13:13:09.386779  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:09.387215  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:09.387243  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:09.387168  135448 retry.go:31] will retry after 894.856792ms: waiting for machine to come up
	I1028 13:13:10.283188  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:10.283686  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:10.283718  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:10.283624  135448 retry.go:31] will retry after 1.265291459s: waiting for machine to come up
	I1028 13:13:11.550244  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:11.550726  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:11.550749  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:11.550654  135448 retry.go:31] will retry after 1.249743184s: waiting for machine to come up
	I1028 13:13:12.801975  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:12.802396  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:12.802410  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:12.802366  135448 retry.go:31] will retry after 2.31180583s: waiting for machine to come up
	I1028 13:13:15.116926  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:15.117467  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:15.117496  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:15.117428  135448 retry.go:31] will retry after 2.267258035s: waiting for machine to come up
	I1028 13:13:17.387100  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:17.387516  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:17.387548  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:17.387478  135448 retry.go:31] will retry after 2.277192393s: waiting for machine to come up
	I1028 13:13:19.666742  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:19.667120  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | unable to find current IP address of domain default-k8s-diff-port-783661 in network mk-default-k8s-diff-port-783661
	I1028 13:13:19.667150  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | I1028 13:13:19.667075  135448 retry.go:31] will retry after 3.233541624s: waiting for machine to come up
	I1028 13:13:22.903660  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.904189  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Found IP for machine: 192.168.61.58
	I1028 13:13:22.904219  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has current primary IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.904225  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Reserving static IP address...
	I1028 13:13:22.904647  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-783661", mac: "52:54:00:07:89:7c", ip: "192.168.61.58"} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:22.904690  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | skip adding static IP to network mk-default-k8s-diff-port-783661 - found existing host DHCP lease matching {name: "default-k8s-diff-port-783661", mac: "52:54:00:07:89:7c", ip: "192.168.61.58"}
	I1028 13:13:22.904721  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Reserved static IP address: 192.168.61.58
	I1028 13:13:22.904740  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Waiting for SSH to be available...
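The retry.go lines above wait for the freshly started domain to obtain an IP, with the delay growing from a few hundred milliseconds to several seconds before the lease is found. A minimal sketch of such a growing, capped backoff loop (hypothetical stub, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP stands in for querying the DHCP leases for the domain's MAC
// address; here it simply fails a few times before succeeding.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.61.58", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the wait with a little jitter, capped so a slow boot does not
		// stretch the interval indefinitely (mirrors the increasing
		// "will retry after ..." delays in the log above).
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		if wait > 4*time.Second {
			wait = 4 * time.Second
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
}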
	I1028 13:13:22.904756  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Getting to WaitForSSH function...
	I1028 13:13:22.906960  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.907271  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:22.907295  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:22.907443  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Using SSH client type: external
	I1028 13:13:22.907469  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa (-rw-------)
	I1028 13:13:22.907494  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 13:13:22.907504  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | About to run SSH command:
	I1028 13:13:22.907526  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | exit 0
	I1028 13:13:23.027352  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | SSH cmd err, output: <nil>: 
	I1028 13:13:23.027735  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetConfigRaw
	I1028 13:13:23.028363  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:23.031114  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.031475  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.031508  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.031772  134197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/config.json ...
	I1028 13:13:23.031996  134197 machine.go:93] provisionDockerMachine start ...
	I1028 13:13:23.032018  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:23.032261  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.034841  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.035229  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.035258  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.035396  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.035574  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.035752  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.035900  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.036048  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.036241  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.036252  134197 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 13:13:23.131447  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 13:13:23.131477  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:23.131732  134197 buildroot.go:166] provisioning hostname "default-k8s-diff-port-783661"
	I1028 13:13:23.131767  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:23.131952  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.134431  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.134729  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.134755  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.134875  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.135054  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.135195  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.135337  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.135498  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.135705  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.135726  134197 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-783661 && echo "default-k8s-diff-port-783661" | sudo tee /etc/hostname
	I1028 13:13:23.244094  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-783661
	
	I1028 13:13:23.244135  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.246707  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.247039  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.247069  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.247226  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.247405  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.247545  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.247664  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.247836  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.248022  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.248046  134197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-783661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-783661/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-783661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 13:13:23.351444  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
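The hostname provisioning above runs shell snippets over SSH as the docker user, authenticating with the machine's private key and with host-key checking disabled. A minimal sketch of running one such command with golang.org/x/crypto/ssh (hypothetical helper; minikube's own SSH runner differs in detail):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens an SSH session with key auth and runs one command,
// returning its combined output. Host key checking is skipped to match the
// StrictHostKeyChecking=no behaviour seen in the log.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	host := "default-k8s-diff-port-783661" // hostname being provisioned above
	cmd := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", host, host)
	out, err := runRemote("192.168.61.58:22", "docker", "/path/to/id_rsa", cmd) // key path is illustrative
	fmt.Println(out, err)
}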
	I1028 13:13:23.351480  134197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 13:13:23.351510  134197 buildroot.go:174] setting up certificates
	I1028 13:13:23.351526  134197 provision.go:84] configureAuth start
	I1028 13:13:23.351536  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetMachineName
	I1028 13:13:23.351842  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:23.354294  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.354607  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.354633  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.354785  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.356931  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.357242  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.357263  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.357408  134197 provision.go:143] copyHostCerts
	I1028 13:13:23.357480  134197 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 13:13:23.357494  134197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 13:13:23.357556  134197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 13:13:23.357663  134197 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 13:13:23.357671  134197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 13:13:23.357697  134197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 13:13:23.357770  134197 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 13:13:23.357777  134197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 13:13:23.357803  134197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 13:13:23.357864  134197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-783661 san=[127.0.0.1 192.168.61.58 default-k8s-diff-port-783661 localhost minikube]
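The server certificate above is generated with the listed SANs (loopback, the VM IP, the machine name, localhost, minikube). A minimal sketch of building a certificate with those SANs using crypto/x509 (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs matching the log: loopback, the VM IP and a few hostnames.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.58")}
	dns := []string{"default-k8s-diff-port-783661", "localhost", "minikube"}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-783661"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	// Self-signed for brevity; the real provisioner uses the local CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}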
	I1028 13:13:23.500838  134197 provision.go:177] copyRemoteCerts
	I1028 13:13:23.500902  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 13:13:23.500927  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.503917  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.504289  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.504316  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.504498  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.504694  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.504874  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.505018  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:23.580704  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 13:13:23.602410  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 13:13:23.623660  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 13:13:23.646050  134197 provision.go:87] duration metric: took 294.509447ms to configureAuth
	I1028 13:13:23.646084  134197 buildroot.go:189] setting minikube options for container-runtime
	I1028 13:13:23.646294  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:13:23.646385  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.649055  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.649434  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.649465  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.649715  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.649912  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.650067  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.650166  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.650329  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.650512  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.650530  134197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 13:13:23.853315  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 13:13:23.853340  134197 machine.go:96] duration metric: took 821.330249ms to provisionDockerMachine
	I1028 13:13:23.853353  134197 start.go:293] postStartSetup for "default-k8s-diff-port-783661" (driver="kvm2")
	I1028 13:13:23.853365  134197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 13:13:23.853409  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:23.853730  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 13:13:23.853758  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.856419  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.856746  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.856777  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.856883  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.857052  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.857219  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.857341  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:23.933578  134197 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 13:13:23.937169  134197 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 13:13:23.937202  134197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 13:13:23.937278  134197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 13:13:23.937367  134197 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 13:13:23.937486  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 13:13:23.945951  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:13:23.967255  134197 start.go:296] duration metric: took 113.888302ms for postStartSetup
	I1028 13:13:23.967294  134197 fix.go:56] duration metric: took 18.680170342s for fixHost
	I1028 13:13:23.967316  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:23.969931  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.970289  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:23.970319  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:23.970502  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:23.970696  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.970868  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:23.970994  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:23.971144  134197 main.go:141] libmachine: Using SSH client type: native
	I1028 13:13:23.971347  134197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I1028 13:13:23.971362  134197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 13:13:24.067579  134197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730121204.042610744
	
	I1028 13:13:24.067601  134197 fix.go:216] guest clock: 1730121204.042610744
	I1028 13:13:24.067610  134197 fix.go:229] Guest: 2024-10-28 13:13:24.042610744 +0000 UTC Remote: 2024-10-28 13:13:23.967298865 +0000 UTC m=+301.263399635 (delta=75.311879ms)
	I1028 13:13:24.067656  134197 fix.go:200] guest clock delta is within tolerance: 75.311879ms
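The guest clock check above parses the output of `date +%s.%N` on the guest and compares it to the host clock, accepting the 75ms delta as within tolerance. A minimal sketch of that comparison (the tolerance value here is an assumption for illustration only):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (seconds and a
// nine-digit nanosecond fraction) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730121204.042610744") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock is off by %s, consider syncing time\n", delta)
	}
}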
	I1028 13:13:24.067663  134197 start.go:83] releasing machines lock for "default-k8s-diff-port-783661", held for 18.78056169s
	I1028 13:13:24.067691  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.067935  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:24.070598  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.070986  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:24.071026  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.071308  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.071858  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.072056  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:24.072173  134197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 13:13:24.072241  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:24.072334  134197 ssh_runner.go:195] Run: cat /version.json
	I1028 13:13:24.072362  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:24.075272  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075444  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075579  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:24.075605  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075743  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:24.075831  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:24.075864  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:24.075885  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:24.076024  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:24.076073  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:24.076150  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:24.076220  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:24.076318  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:24.076449  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:24.148656  134197 ssh_runner.go:195] Run: systemctl --version
	I1028 13:13:24.173826  134197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 13:13:24.314420  134197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 13:13:24.320964  134197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 13:13:24.321040  134197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 13:13:24.336093  134197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 13:13:24.336114  134197 start.go:495] detecting cgroup driver to use...
	I1028 13:13:24.336176  134197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 13:13:24.355586  134197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 13:13:24.369613  134197 docker.go:217] disabling cri-docker service (if available) ...
	I1028 13:13:24.369661  134197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 13:13:24.383661  134197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 13:13:24.397552  134197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 13:13:24.517746  134197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 13:13:24.667013  134197 docker.go:233] disabling docker service ...
	I1028 13:13:24.667115  134197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 13:13:24.680756  134197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 13:13:24.692610  134197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 13:13:24.812530  134197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 13:13:24.921788  134197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 13:13:24.934431  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 13:13:24.950796  134197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 13:13:24.950855  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.959904  134197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 13:13:24.959974  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.968923  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.977711  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:24.986789  134197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 13:13:24.996658  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:25.005472  134197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:25.020549  134197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:13:25.029317  134197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 13:13:25.037514  134197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 13:13:25.037614  134197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 13:13:25.050018  134197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 13:13:25.058328  134197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:13:25.164529  134197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 13:13:25.248691  134197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 13:13:25.248759  134197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 13:13:25.252922  134197 start.go:563] Will wait 60s for crictl version
	I1028 13:13:25.252997  134197 ssh_runner.go:195] Run: which crictl
	I1028 13:13:25.256182  134197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 13:13:25.294375  134197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
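Before querying crictl, the log waits up to 60s for /var/run/crio/crio.sock to appear after restarting the runtime. A minimal sketch of polling for a path with a deadline (hypothetical helper, not minikube's start.go):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path ..." step in the log.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready; safe to run `sudo crictl version`")
}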
	I1028 13:13:25.294522  134197 ssh_runner.go:195] Run: crio --version
	I1028 13:13:25.321489  134197 ssh_runner.go:195] Run: crio --version
	I1028 13:13:25.349730  134197 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 13:13:25.351032  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetIP
	I1028 13:13:25.353570  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:25.353919  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:25.353944  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:25.354159  134197 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 13:13:25.357796  134197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:13:25.369212  134197 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 13:13:25.369364  134197 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:13:25.369421  134197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:13:25.400975  134197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 13:13:25.401039  134197 ssh_runner.go:195] Run: which lz4
	I1028 13:13:25.404590  134197 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 13:13:25.408131  134197 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 13:13:25.408164  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 13:13:26.592887  134197 crio.go:462] duration metric: took 1.18831143s to copy over tarball
	I1028 13:13:26.592984  134197 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 13:13:28.669692  134197 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.07667117s)
	I1028 13:13:28.669728  134197 crio.go:469] duration metric: took 2.076802189s to extract the tarball
	I1028 13:13:28.669739  134197 ssh_runner.go:146] rm: /preloaded.tar.lz4
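The preload step above reports how long the copy and extraction took ("duration metric: took ..."). A minimal sketch of the timing pattern behind such lines (hypothetical helper, running a local command for brevity rather than one over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timedRun executes a command and reports how long it took, in the same
// spirit as the "duration metric" lines in the log above.
func timedRun(name string, args ...string) error {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("Completed: %s %v: (%s)\n", name, args, time.Since(start))
	if err != nil {
		return fmt.Errorf("%s failed: %w\noutput: %s", name, err, out)
	}
	return nil
}

func main() {
	// Example stand-in for the tarball extraction timed in the log.
	_ = timedRun("tar", "--version")
}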
	I1028 13:13:28.705768  134197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:13:28.746918  134197 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 13:13:28.746943  134197 cache_images.go:84] Images are preloaded, skipping loading
	I1028 13:13:28.746953  134197 kubeadm.go:934] updating node { 192.168.61.58 8444 v1.31.2 crio true true} ...
	I1028 13:13:28.747105  134197 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-783661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 13:13:28.747193  134197 ssh_runner.go:195] Run: crio config
	I1028 13:13:28.799814  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:13:28.799844  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:13:28.799866  134197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 13:13:28.799905  134197 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-783661 NodeName:default-k8s-diff-port-783661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 13:13:28.800138  134197 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-783661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 13:13:28.800228  134197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 13:13:28.809781  134197 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 13:13:28.809860  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 13:13:28.818307  134197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 13:13:28.833165  134197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 13:13:28.847557  134197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 13:13:28.862507  134197 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I1028 13:13:28.865883  134197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
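Note: the bash one-liner above updates /etc/hosts idempotently: it filters out any existing "control-plane.minikube.internal" line, appends the current mapping, and copies the result back over /etc/hosts. A rough Go equivalent of the same filter-and-append logic is sketched below; it writes to a temporary path instead of /etc/hosts so it is safe to run as an example.

// hosts_entry.go: sketch of the idempotent /etc/hosts update above — drop any
// stale "control-plane.minikube.internal" line, then append the current IP.
// Writes to /tmp/hosts.example rather than the real /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping, mirroring the `grep -v` above
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	out := upsertHost(string(in), "192.168.61.58", "control-plane.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.example", []byte(out), 0644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote /tmp/hosts.example")
}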
	I1028 13:13:28.876993  134197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:13:29.010474  134197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:13:29.026282  134197 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661 for IP: 192.168.61.58
	I1028 13:13:29.026319  134197 certs.go:194] generating shared ca certs ...
	I1028 13:13:29.026341  134197 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:13:29.026554  134197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 13:13:29.026615  134197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 13:13:29.026635  134197 certs.go:256] generating profile certs ...
	I1028 13:13:29.026770  134197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/client.key
	I1028 13:13:29.026859  134197 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/apiserver.key.2140521c
	I1028 13:13:29.026902  134197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/proxy-client.key
	I1028 13:13:29.027067  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 13:13:29.027113  134197 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 13:13:29.027129  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 13:13:29.027183  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 13:13:29.027218  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 13:13:29.027256  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 13:13:29.027314  134197 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:13:29.028337  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 13:13:29.059748  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 13:13:29.090749  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 13:13:29.118669  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 13:13:29.145013  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 13:13:29.176049  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 13:13:29.199479  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 13:13:29.225368  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/default-k8s-diff-port-783661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 13:13:29.248427  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 13:13:29.270163  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 13:13:29.291310  134197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 13:13:29.313075  134197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 13:13:29.329050  134197 ssh_runner.go:195] Run: openssl version
	I1028 13:13:29.334785  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 13:13:29.345731  134197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 13:13:29.349902  134197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 13:13:29.349950  134197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 13:13:29.355107  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 13:13:29.364475  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 13:13:29.373697  134197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:13:29.377792  134197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:13:29.377850  134197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:13:29.382892  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 13:13:29.392054  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 13:13:29.402513  134197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 13:13:29.406438  134197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 13:13:29.406511  134197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 13:13:29.411444  134197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
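Note: each "openssl x509 -hash -noout" call above computes a certificate's subject hash, and the following "ln -fs .../<hash>.0" creates the hashed symlink OpenSSL uses to look up CAs in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA). A small sketch of that hash-then-link step, targeting a scratch directory instead of the real trust store; the cert path is one of the files named in the log.

// ca_hash_link.go: sketch of creating the "<subject-hash>.0" symlink that
// OpenSSL expects in a CA directory, matching the openssl/ln commands above.
// Links into /tmp/ca-links rather than /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	dir := "/tmp/ca-links"                 // stand-in for /etc/ssl/certs
	if err := os.MkdirAll(dir, 0755); err != nil {
		fmt.Println(err)
		return
	}
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("linked", link, "->", certPath)
}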
	I1028 13:13:29.420742  134197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 13:13:29.428743  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 13:13:29.435065  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 13:13:29.440678  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 13:13:29.445930  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 13:13:29.451012  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 13:13:29.456345  134197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
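Note: each "-checkend 86400" call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a failing check would trigger regeneration. The same check expressed with crypto/x509 is sketched below, using one of the cert paths from the log.

// cert_checkend.go: sketch of `openssl x509 -checkend 86400` in Go — report
// whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
	}
}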
	I1028 13:13:29.461609  134197 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-783661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-783661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:13:29.461691  134197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 13:13:29.461725  134197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 13:13:29.496024  134197 cri.go:89] found id: ""
	I1028 13:13:29.496095  134197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 13:13:29.505387  134197 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 13:13:29.505404  134197 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 13:13:29.505449  134197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 13:13:29.514612  134197 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 13:13:29.515716  134197 kubeconfig.go:125] found "default-k8s-diff-port-783661" server: "https://192.168.61.58:8444"
	I1028 13:13:29.518400  134197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 13:13:29.527127  134197 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.58
	I1028 13:13:29.527152  134197 kubeadm.go:1160] stopping kube-system containers ...
	I1028 13:13:29.527165  134197 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 13:13:29.527207  134197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 13:13:29.562704  134197 cri.go:89] found id: ""
	I1028 13:13:29.562779  134197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 13:13:29.579423  134197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 13:13:29.588397  134197 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 13:13:29.588431  134197 kubeadm.go:157] found existing configuration files:
	
	I1028 13:13:29.588480  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 13:13:29.597602  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 13:13:29.597671  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 13:13:29.606595  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 13:13:29.614682  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 13:13:29.614734  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 13:13:29.622987  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 13:13:29.630860  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 13:13:29.630910  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 13:13:29.639251  134197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 13:13:29.647268  134197 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 13:13:29.647317  134197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 13:13:29.655608  134197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 13:13:29.664127  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:29.763979  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:31.190931  134197 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.426908776s)
	I1028 13:13:31.190975  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:31.380916  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:31.444452  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
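Note: for a restart the control plane is not rebuilt with a full "kubeadm init"; the individual phases above (certs, kubeconfig, kubelet-start, control-plane, etcd local, and later "addon all") are replayed against the generated config. A compact sketch of driving that phase sequence is below, using the binary and config paths shown in the log; running it for real requires root on the node.

// kubeadm_phases.go: sketch of replaying the kubeadm init phases seen above
// instead of a full `kubeadm init`. Paths mirror the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.2/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		fmt.Println("running:", kubeadm, strings.Join(args, " "))
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}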
	I1028 13:13:31.511848  134197 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:13:31.511952  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:32.013005  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:32.512883  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:33.012777  134197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:13:33.048586  134197 api_server.go:72] duration metric: took 1.536736279s to wait for apiserver process to appear ...
	I1028 13:13:33.048616  134197 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:13:33.048643  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:33.049178  134197 api_server.go:269] stopped: https://192.168.61.58:8444/healthz: Get "https://192.168.61.58:8444/healthz": dial tcp 192.168.61.58:8444: connect: connection refused
	I1028 13:13:33.548706  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:36.090092  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 13:13:36.090127  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 13:13:36.090145  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:36.149045  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 13:13:36.149077  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 13:13:36.549621  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:36.555510  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 13:13:36.555539  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 13:13:37.049002  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:37.057764  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 13:13:37.057791  134197 api_server.go:103] status: https://192.168.61.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 13:13:37.549545  134197 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8444/healthz ...
	I1028 13:13:37.554197  134197 api_server.go:279] https://192.168.61.58:8444/healthz returned 200:
	ok
	I1028 13:13:37.564130  134197 api_server.go:141] control plane version: v1.31.2
	I1028 13:13:37.564158  134197 api_server.go:131] duration metric: took 4.515535111s to wait for apiserver health ...
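Note: the health wait above polls https://192.168.61.58:8444/healthz roughly every 500ms; the anonymous 403 and the 500 "healthz check failed" responses are treated as "not ready yet", and the loop exits once the endpoint returns 200 "ok" (about 4.5s in this run). A self-contained sketch of that polling loop is below; TLS verification is skipped only because the example does not load the cluster CA bundle.

// healthz_wait.go: sketch of polling an apiserver /healthz endpoint until it
// returns 200, mirroring the wait loop above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.58:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			resp.Body.Close()
			fmt.Println("healthz status:", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}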
	I1028 13:13:37.564168  134197 cni.go:84] Creating CNI manager for ""
	I1028 13:13:37.564174  134197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 13:13:37.566201  134197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 13:13:37.567535  134197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 13:13:37.577171  134197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
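Note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config for the 10.244.0.0/16 pod CIDR chosen earlier. The log does not show its contents; the sketch below writes a generic bridge + host-local conflist of the same general shape, purely as an illustration, not the exact file minikube installs.

// bridge_conflist.go: writes an illustrative bridge CNI conflist to a temp
// path. NOT the exact minikube file (its contents are not in the log); it
// only shows the bridge + host-local IPAM shape for 10.244.0.0/16.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	path := "/tmp/1-k8s.conflist" // real target would be /etc/cni/net.d/1-k8s.conflist
	if err := os.WriteFile(path, []byte(conflist), 0644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote", path)
}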
	I1028 13:13:37.594324  134197 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:13:37.605014  134197 system_pods.go:59] 8 kube-system pods found
	I1028 13:13:37.605066  134197 system_pods.go:61] "coredns-7c65d6cfc9-x8gvd" [4498824f-7ce1-4167-8701-74cadd3fa83c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 13:13:37.605076  134197 system_pods.go:61] "etcd-default-k8s-diff-port-783661" [9a8a5a39-b0bb-4144-9e70-98fed2bbc838] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 13:13:37.605083  134197 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-783661" [e221604a-5b54-4755-952d-0c699167f402] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 13:13:37.605089  134197 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-783661" [95e9472e-3c24-4fd8-b79c-949d8cd980da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 13:13:37.605101  134197 system_pods.go:61] "kube-proxy-ff797" [ed2dce0b-4dc9-406e-a9c3-f91d75fa0876] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 13:13:37.605106  134197 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-783661" [7cab2cef-dacb-4943-9564-a1a625afa198] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 13:13:37.605113  134197 system_pods.go:61] "metrics-server-6867b74b74-rkx62" [31c37fb4-0650-481d-b1e3-4956769450d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 13:13:37.605118  134197 system_pods.go:61] "storage-provisioner" [21a53238-251d-4581-b4c3-3a788545ab0c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 13:13:37.605127  134197 system_pods.go:74] duration metric: took 10.78446ms to wait for pod list to return data ...
	I1028 13:13:37.605135  134197 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:13:37.610793  134197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:13:37.610817  134197 node_conditions.go:123] node cpu capacity is 2
	I1028 13:13:37.610830  134197 node_conditions.go:105] duration metric: took 5.689372ms to run NodePressure ...
	I1028 13:13:37.610855  134197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 13:13:37.889577  134197 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 13:13:37.893705  134197 kubeadm.go:739] kubelet initialised
	I1028 13:13:37.893729  134197 kubeadm.go:740] duration metric: took 4.119893ms waiting for restarted kubelet to initialise ...
	I1028 13:13:37.893753  134197 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:13:37.899304  134197 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.903662  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.903687  134197 pod_ready.go:82] duration metric: took 4.360023ms for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.903698  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.903710  134197 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.907223  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.907239  134197 pod_ready.go:82] duration metric: took 3.518315ms for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.907251  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.907257  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.911026  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.911043  134197 pod_ready.go:82] duration metric: took 3.780236ms for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.911051  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.911057  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:37.997939  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.997962  134197 pod_ready.go:82] duration metric: took 86.896486ms for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:37.997972  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:37.997979  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:38.397652  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-proxy-ff797" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.397683  134197 pod_ready.go:82] duration metric: took 399.693086ms for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:38.397694  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-proxy-ff797" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.397701  134197 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:38.797922  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.797955  134197 pod_ready.go:82] duration metric: took 400.242965ms for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:38.797985  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:38.797997  134197 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:39.197558  134197 pod_ready.go:98] node "default-k8s-diff-port-783661" hosting pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:39.197592  134197 pod_ready.go:82] duration metric: took 399.575732ms for pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace to be "Ready" ...
	E1028 13:13:39.197604  134197 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-783661" hosting pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:39.197612  134197 pod_ready.go:39] duration metric: took 1.303837299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
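Note: the extra wait above inspects each system pod's Ready condition but skips pods whose node is not yet Ready (hence the "skipping!" entries). A client-go sketch of the per-pod Ready check is below; the kubeconfig path is the one written by this run, and the pod name is the coredns pod from the list above.

// pod_ready.go: sketch of checking a pod's Ready condition with client-go,
// the same condition the wait loop above inspects.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19875-77800/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-x8gvd", metav1.GetOptions{})
	if err != nil {
		fmt.Println(err)
		return
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}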
	I1028 13:13:39.197634  134197 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 13:13:39.210450  134197 ops.go:34] apiserver oom_adj: -16
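Note: the "cat /proc/$(pgrep kube-apiserver)/oom_adj" check confirms the kubelet applied OOM-score protection to the apiserver; -16 makes the kernel much less likely to pick it during memory pressure. A small sketch of the same lookup:

// oom_adj.go: find the kube-apiserver PID with pgrep and read its
// /proc/<pid>/oom_adj value, as the check above does. Expects to run on the
// node itself (where it would print -16).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("kube-apiserver pid=%s oom_adj=%s", pid, val)
}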
	I1028 13:13:39.210472  134197 kubeadm.go:597] duration metric: took 9.705061723s to restartPrimaryControlPlane
	I1028 13:13:39.210482  134197 kubeadm.go:394] duration metric: took 9.74887869s to StartCluster
	I1028 13:13:39.210501  134197 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:13:39.210585  134197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:13:39.212960  134197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:13:39.213234  134197 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:13:39.213297  134197 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 13:13:39.213409  134197 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-783661"
	I1028 13:13:39.213413  134197 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-783661"
	I1028 13:13:39.213441  134197 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-783661"
	W1028 13:13:39.213454  134197 addons.go:243] addon storage-provisioner should already be in state true
	I1028 13:13:39.213453  134197 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:13:39.213461  134197 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-783661"
	I1028 13:13:39.213485  134197 host.go:66] Checking if "default-k8s-diff-port-783661" exists ...
	I1028 13:13:39.213475  134197 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-783661"
	I1028 13:13:39.213526  134197 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-783661"
	W1028 13:13:39.213543  134197 addons.go:243] addon metrics-server should already be in state true
	I1028 13:13:39.213616  134197 host.go:66] Checking if "default-k8s-diff-port-783661" exists ...
	I1028 13:13:39.213951  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.213989  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.214006  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.214039  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.213996  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.214110  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.215292  134197 out.go:177] * Verifying Kubernetes components...
	I1028 13:13:39.216619  134197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:13:39.229952  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I1028 13:13:39.230093  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I1028 13:13:39.230210  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I1028 13:13:39.230480  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.230884  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.231128  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.231197  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.231222  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.231663  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.231736  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.231756  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.232343  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.232410  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.232469  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.233021  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.233049  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.234199  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.234229  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.234607  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.234787  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.238467  134197 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-783661"
	W1028 13:13:39.238500  134197 addons.go:243] addon default-storageclass should already be in state true
	I1028 13:13:39.238532  134197 host.go:66] Checking if "default-k8s-diff-port-783661" exists ...
	I1028 13:13:39.238939  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.238985  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.248564  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1028 13:13:39.249000  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.249552  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.249568  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1028 13:13:39.249576  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.249955  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.250011  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.250348  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.250466  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.250482  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.250839  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.251157  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.252090  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:39.252962  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:39.254247  134197 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 13:13:39.255072  134197 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 13:13:39.256106  134197 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:13:39.256129  134197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 13:13:39.256150  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:39.256715  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 13:13:39.256730  134197 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 13:13:39.256746  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:39.259364  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33205
	I1028 13:13:39.260132  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.260238  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.260596  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:39.260617  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.260758  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.260778  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.260842  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.260892  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:39.261059  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:39.261210  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:39.261234  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:39.261247  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.261344  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:39.261496  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:39.261657  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:39.261763  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:39.261871  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:39.261879  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.262448  134197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:13:39.262479  134197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:13:39.308139  134197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I1028 13:13:39.308709  134197 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:13:39.309316  134197 main.go:141] libmachine: Using API Version  1
	I1028 13:13:39.309344  134197 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:13:39.309738  134197 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:13:39.309932  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetState
	I1028 13:13:39.311478  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .DriverName
	I1028 13:13:39.311716  134197 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 13:13:39.311733  134197 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 13:13:39.311751  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHHostname
	I1028 13:13:39.314701  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.315147  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:89:7c", ip: ""} in network mk-default-k8s-diff-port-783661: {Iface:virbr3 ExpiryTime:2024-10-28 14:13:15 +0000 UTC Type:0 Mac:52:54:00:07:89:7c Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:default-k8s-diff-port-783661 Clientid:01:52:54:00:07:89:7c}
	I1028 13:13:39.315181  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | domain default-k8s-diff-port-783661 has defined IP address 192.168.61.58 and MAC address 52:54:00:07:89:7c in network mk-default-k8s-diff-port-783661
	I1028 13:13:39.315333  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHPort
	I1028 13:13:39.315519  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHKeyPath
	I1028 13:13:39.315697  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .GetSSHUsername
	I1028 13:13:39.315849  134197 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/default-k8s-diff-port-783661/id_rsa Username:docker}
	I1028 13:13:39.393200  134197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:13:39.408534  134197 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-783661" to be "Ready" ...
	I1028 13:13:39.501187  134197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 13:13:39.531748  134197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:13:39.544393  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 13:13:39.544418  134197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 13:13:39.594981  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 13:13:39.595012  134197 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 13:13:39.618922  134197 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 13:13:39.618951  134197 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 13:13:39.638636  134197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 13:13:39.962178  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:39.962205  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:39.962485  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:39.962504  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:39.962519  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:39.962537  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:39.962548  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:39.962750  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:39.962766  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:39.962792  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:39.972199  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:39.972221  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:39.972480  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:39.972491  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:39.972502  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.655075  134197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123283251s)
	I1028 13:13:40.655142  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.655155  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.655454  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.655502  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.655511  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.655525  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.655553  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.655901  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.655913  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.655927  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.747119  134197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.108438664s)
	I1028 13:13:40.747181  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.747196  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.747501  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.747517  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.747530  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.747539  134197 main.go:141] libmachine: Making call to close driver server
	I1028 13:13:40.747547  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) Calling .Close
	I1028 13:13:40.747800  134197 main.go:141] libmachine: (default-k8s-diff-port-783661) DBG | Closing plugin on server side
	I1028 13:13:40.747821  134197 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:13:40.747844  134197 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:13:40.747865  134197 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-783661"
	I1028 13:13:40.749733  134197 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 13:13:40.750923  134197 addons.go:510] duration metric: took 1.53763073s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 13:13:41.413083  134197 node_ready.go:53] node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:43.912827  134197 node_ready.go:53] node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:46.412268  134197 node_ready.go:53] node "default-k8s-diff-port-783661" has status "Ready":"False"
	I1028 13:13:46.913460  134197 node_ready.go:49] node "default-k8s-diff-port-783661" has status "Ready":"True"
	I1028 13:13:46.913489  134197 node_ready.go:38] duration metric: took 7.504910707s for node "default-k8s-diff-port-783661" to be "Ready" ...
	I1028 13:13:46.913499  134197 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:13:46.918312  134197 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.926982  134197 pod_ready.go:93] pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:46.927003  134197 pod_ready.go:82] duration metric: took 8.667996ms for pod "coredns-7c65d6cfc9-x8gvd" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.927014  134197 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.931410  134197 pod_ready.go:93] pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:46.931429  134197 pod_ready.go:82] duration metric: took 4.406844ms for pod "etcd-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.931437  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.939500  134197 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:46.939520  134197 pod_ready.go:82] duration metric: took 8.077556ms for pod "kube-apiserver-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:46.939529  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:47.945396  134197 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:47.945424  134197 pod_ready.go:82] duration metric: took 1.005888192s for pod "kube-controller-manager-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:47.945434  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.113116  134197 pod_ready.go:93] pod "kube-proxy-ff797" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:48.113139  134197 pod_ready.go:82] duration metric: took 167.697182ms for pod "kube-proxy-ff797" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.113152  134197 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.513307  134197 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace has status "Ready":"True"
	I1028 13:13:48.513333  134197 pod_ready.go:82] duration metric: took 400.171263ms for pod "kube-scheduler-default-k8s-diff-port-783661" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:48.513347  134197 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace to be "Ready" ...
	I1028 13:13:50.519958  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:13:53.019212  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:13:55.519405  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:13:58.020739  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:00.520634  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:03.020065  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:05.520194  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:08.021476  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:10.519420  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:12.519619  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:15.018939  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:17.019330  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:19.019515  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:21.518826  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:23.518973  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:25.519832  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:27.520330  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:30.019374  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:32.019553  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:34.520161  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:37.019196  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:39.019613  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:41.519249  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:44.019222  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:46.019428  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:48.021053  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:50.519494  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:52.519778  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:54.519959  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:57.019089  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:14:59.019844  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:01.519108  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:03.519666  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:06.019970  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:08.519009  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:10.519404  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:13.020229  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:15.519564  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:18.019894  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:20.520854  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:23.018865  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:25.019206  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:27.019806  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:29.020839  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:31.519390  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:34.019544  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:36.519511  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:39.022002  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:41.519283  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:44.019163  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:46.519668  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:49.019959  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	I1028 13:15:51.520134  134197 pod_ready.go:103] pod "metrics-server-6867b74b74-rkx62" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.816989543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121357816969642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=752b88c7-57df-4d49-b122-74c382ee6c9a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.817511104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee1d314a-b0c3-4328-a081-ccd1ba6e6e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.817597080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee1d314a-b0c3-4328-a081-ccd1ba6e6e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.817647307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ee1d314a-b0c3-4328-a081-ccd1ba6e6e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.846104956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d738bc3-fdb2-43b5-b412-5f07430b71b2 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.846186147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d738bc3-fdb2-43b5-b412-5f07430b71b2 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.847082415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9de8b3ea-f855-4838-902d-9e6ced8a4fba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.847462416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121357847442813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9de8b3ea-f855-4838-902d-9e6ced8a4fba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.847964029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f53839f-33ff-413f-a342-412126b0f918 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.848033978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f53839f-33ff-413f-a342-412126b0f918 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.848078742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4f53839f-33ff-413f-a342-412126b0f918 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.875709910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f06f615-7ac6-4e69-b0c1-ca40fe288ada name=/runtime.v1.RuntimeService/Version
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.875867346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f06f615-7ac6-4e69-b0c1-ca40fe288ada name=/runtime.v1.RuntimeService/Version
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.876707988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74328b42-f9cb-4528-b6c3-0101a5b6d98b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.877127276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121357877109784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74328b42-f9cb-4528-b6c3-0101a5b6d98b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.877630610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aa22892-3cb8-4a6f-8d20-01928a0dd006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.877695848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aa22892-3cb8-4a6f-8d20-01928a0dd006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.877750697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6aa22892-3cb8-4a6f-8d20-01928a0dd006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.909825553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a8ffc6b-bdbd-45e9-8bf0-100959feda73 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.909908017Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a8ffc6b-bdbd-45e9-8bf0-100959feda73 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.910764479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6faa9e19-7a8a-4317-b8ec-a3ff6dc71347 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.911180238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121357911157595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6faa9e19-7a8a-4317-b8ec-a3ff6dc71347 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.911835629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a239798-c1fe-4959-b97d-9a657a46a6ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.911899353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a239798-c1fe-4959-b97d-9a657a46a6ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:15:57 old-k8s-version-733464 crio[631]: time="2024-10-28 13:15:57.911942515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4a239798-c1fe-4959-b97d-9a657a46a6ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053749] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037595] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.829427] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.915680] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.519083] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct28 12:57] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.070642] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061498] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.188572] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.147125] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.277465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.361094] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.069839] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.013009] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.125884] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 13:01] systemd-fstab-generator[5140]: Ignoring "noauto" option for root device
	[Oct28 13:03] systemd-fstab-generator[5420]: Ignoring "noauto" option for root device
	[  +0.055820] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:15:58 up 19 min,  0 users,  load average: 0.00, 0.00, 0.01
	Linux old-k8s-version-733464 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000095740, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b40ea0, 0x24, 0x0, ...)
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]: net.(*Dialer).DialContext(0xc000c95860, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b40ea0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000cac100, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b40ea0, 0x24, 0x60, 0x7f317b6ee108, 0x118, ...)
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]: net/http.(*Transport).dial(0xc00058f7c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b40ea0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]: net/http.(*Transport).dialConn(0xc00058f7c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc00086dce0, 0x5, 0xc000b40ea0, 0x24, 0x0, 0xc00090e120, ...)
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]: net/http.(*Transport).dialConnFor(0xc00058f7c0, 0xc00002bc30)
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]: created by net/http.(*Transport).queueForDial
	Oct 28 13:15:56 old-k8s-version-733464 kubelet[6865]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 28 13:15:56 old-k8s-version-733464 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 13:15:56 old-k8s-version-733464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 13:15:57 old-k8s-version-733464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 133.
	Oct 28 13:15:57 old-k8s-version-733464 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 13:15:57 old-k8s-version-733464 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 13:15:57 old-k8s-version-733464 kubelet[6894]: I1028 13:15:57.666411    6894 server.go:416] Version: v1.20.0
	Oct 28 13:15:57 old-k8s-version-733464 kubelet[6894]: I1028 13:15:57.666767    6894 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 13:15:57 old-k8s-version-733464 kubelet[6894]: I1028 13:15:57.668606    6894 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 13:15:57 old-k8s-version-733464 kubelet[6894]: I1028 13:15:57.669509    6894 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 28 13:15:57 old-k8s-version-733464 kubelet[6894]: W1028 13:15:57.669522    6894 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 2 (218.962562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-733464" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (107.98s)
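The kubelet log captured above ends in a restart loop (restart counter at 133) with every dial to localhost:8443 refused, so the control plane never came back after the stop. As a manual follow-up sketch (not part of the captured run, and assuming the old-k8s-version-733464 profile still exists on the CI host), the crash loop could be inspected directly on the node with the same ssh pattern the harness uses elsewhere in this report:

	out/minikube-linux-amd64 -p old-k8s-version-733464 ssh sudo systemctl status kubelet --no-pager
	out/minikube-linux-amd64 -p old-k8s-version-733464 ssh sudo journalctl -u kubelet --no-pager -n 100
	out/minikube-linux-amd64 -p old-k8s-version-733464 ssh sudo crictl ps -a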

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (541.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 13:27:00.630498953 +0000 UTC m=+6602.196070385
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
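Because the wait above is only a label-selector poll against the kubernetes-dashboard namespace, a quick manual check (a sketch, not part of the captured run, reusing the context name from this test) would be to list and describe whatever the dashboard addon actually created:

	kubectl --context default-k8s-diff-port-783661 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-783661 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard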
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-783661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-783661 logs -n 25: (1.058614563s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-297280 sudo iptables                       | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo docker                         | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo find                           | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo crio                           | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-297280                                     | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:20:59
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:20:59.730326  146109 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:20:59.730428  146109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:20:59.730440  146109 out.go:358] Setting ErrFile to fd 2...
	I1028 13:20:59.730446  146109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:20:59.730641  146109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:20:59.731248  146109 out.go:352] Setting JSON to false
	I1028 13:20:59.732351  146109 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11010,"bootTime":1730110650,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:20:59.732464  146109 start.go:139] virtualization: kvm guest
	I1028 13:20:59.734383  146109 out.go:177] * [bridge-297280] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:20:59.736004  146109 notify.go:220] Checking for updates...
	I1028 13:20:59.736029  146109 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:20:59.737281  146109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:20:59.738577  146109 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:20:59.740045  146109 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:20:59.741394  146109 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:20:59.742734  146109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:20:59.744632  146109 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:20:59.744786  146109 config.go:182] Loaded profile config "enable-default-cni-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:20:59.744911  146109 config.go:182] Loaded profile config "flannel-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:20:59.745017  146109 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:20:59.784227  146109 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 13:20:59.785566  146109 start.go:297] selected driver: kvm2
	I1028 13:20:59.785586  146109 start.go:901] validating driver "kvm2" against <nil>
	I1028 13:20:59.785601  146109 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:20:59.786595  146109 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:20:59.786700  146109 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:20:59.802632  146109 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:20:59.802699  146109 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 13:20:59.803057  146109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:20:59.803107  146109 cni.go:84] Creating CNI manager for "bridge"
	I1028 13:20:59.803115  146109 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 13:20:59.803185  146109 start.go:340] cluster config:
	{Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:20:59.803353  146109 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:20:59.805029  146109 out.go:177] * Starting "bridge-297280" primary control-plane node in "bridge-297280" cluster
	I1028 13:20:59.806163  146109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:20:59.806220  146109 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:20:59.806234  146109 cache.go:56] Caching tarball of preloaded images
	I1028 13:20:59.806342  146109 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:20:59.806357  146109 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:20:59.806493  146109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/config.json ...
	I1028 13:20:59.806522  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/config.json: {Name:mkf499151a7940cb7d6b517784be2ec3ae5a19ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:20:59.806718  146109 start.go:360] acquireMachinesLock for bridge-297280: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:20:59.806773  146109 start.go:364] duration metric: took 32.091µs to acquireMachinesLock for "bridge-297280"
	I1028 13:20:59.806799  146109 start.go:93] Provisioning new machine with config: &{Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
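For reference, the machine config logged above corresponds roughly to the flags the test harness passes when it provisions this profile; a minimal sketch of an equivalent manual invocation (profile name and values taken from the config above, all other options left at their defaults):

    minikube start -p bridge-297280 \
      --driver=kvm2 \
      --container-runtime=crio \
      --cni=bridge \
      --kubernetes-version=v1.31.2 \
      --memory=3072 --cpus=2 --disk-size=20000mb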
	I1028 13:20:59.806896  146109 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 13:21:01.158893  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:03.162269  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:20:59.809646  146109 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 13:20:59.809832  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:20:59.809897  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:20:59.826229  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
	I1028 13:20:59.826822  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:20:59.827504  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:20:59.827533  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:20:59.827948  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:20:59.828171  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:20:59.828354  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:20:59.828544  146109 start.go:159] libmachine.API.Create for "bridge-297280" (driver="kvm2")
	I1028 13:20:59.828578  146109 client.go:168] LocalClient.Create starting
	I1028 13:20:59.828618  146109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 13:20:59.828661  146109 main.go:141] libmachine: Decoding PEM data...
	I1028 13:20:59.828694  146109 main.go:141] libmachine: Parsing certificate...
	I1028 13:20:59.828758  146109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 13:20:59.828786  146109 main.go:141] libmachine: Decoding PEM data...
	I1028 13:20:59.828802  146109 main.go:141] libmachine: Parsing certificate...
	I1028 13:20:59.828841  146109 main.go:141] libmachine: Running pre-create checks...
	I1028 13:20:59.828861  146109 main.go:141] libmachine: (bridge-297280) Calling .PreCreateCheck
	I1028 13:20:59.829331  146109 main.go:141] libmachine: (bridge-297280) Calling .GetConfigRaw
	I1028 13:20:59.829797  146109 main.go:141] libmachine: Creating machine...
	I1028 13:20:59.829813  146109 main.go:141] libmachine: (bridge-297280) Calling .Create
	I1028 13:20:59.829970  146109 main.go:141] libmachine: (bridge-297280) Creating KVM machine...
	I1028 13:20:59.831375  146109 main.go:141] libmachine: (bridge-297280) DBG | found existing default KVM network
	I1028 13:20:59.832954  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:20:59.832767  146132 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211820}
	I1028 13:20:59.832975  146109 main.go:141] libmachine: (bridge-297280) DBG | created network xml: 
	I1028 13:20:59.832986  146109 main.go:141] libmachine: (bridge-297280) DBG | <network>
	I1028 13:20:59.832994  146109 main.go:141] libmachine: (bridge-297280) DBG |   <name>mk-bridge-297280</name>
	I1028 13:20:59.833003  146109 main.go:141] libmachine: (bridge-297280) DBG |   <dns enable='no'/>
	I1028 13:20:59.833013  146109 main.go:141] libmachine: (bridge-297280) DBG |   
	I1028 13:20:59.833025  146109 main.go:141] libmachine: (bridge-297280) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 13:20:59.833036  146109 main.go:141] libmachine: (bridge-297280) DBG |     <dhcp>
	I1028 13:20:59.833094  146109 main.go:141] libmachine: (bridge-297280) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 13:20:59.833124  146109 main.go:141] libmachine: (bridge-297280) DBG |     </dhcp>
	I1028 13:20:59.833139  146109 main.go:141] libmachine: (bridge-297280) DBG |   </ip>
	I1028 13:20:59.833148  146109 main.go:141] libmachine: (bridge-297280) DBG |   
	I1028 13:20:59.833156  146109 main.go:141] libmachine: (bridge-297280) DBG | </network>
	I1028 13:20:59.833161  146109 main.go:141] libmachine: (bridge-297280) DBG | 
	I1028 13:20:59.838256  146109 main.go:141] libmachine: (bridge-297280) DBG | trying to create private KVM network mk-bridge-297280 192.168.39.0/24...
	I1028 13:20:59.924254  146109 main.go:141] libmachine: (bridge-297280) DBG | private KVM network mk-bridge-297280 192.168.39.0/24 created
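The private network above can also be created by hand from the same XML; a minimal sketch, assuming the network definition logged above is saved as mk-bridge-297280.xml on a host reachable via the qemu:///system libvirt URI:

    virsh --connect qemu:///system net-define mk-bridge-297280.xml
    virsh --connect qemu:///system net-start mk-bridge-297280
    virsh --connect qemu:///system net-dhcp-leases mk-bridge-297280   # lists leases once a guest attaches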
	I1028 13:20:59.924284  146109 main.go:141] libmachine: (bridge-297280) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280 ...
	I1028 13:20:59.924298  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:20:59.924205  146132 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:20:59.924376  146109 main.go:141] libmachine: (bridge-297280) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 13:20:59.924413  146109 main.go:141] libmachine: (bridge-297280) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 13:21:00.208472  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:00.208340  146132 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa...
	I1028 13:21:00.328153  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:00.327989  146132 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/bridge-297280.rawdisk...
	I1028 13:21:00.328184  146109 main.go:141] libmachine: (bridge-297280) DBG | Writing magic tar header
	I1028 13:21:00.328197  146109 main.go:141] libmachine: (bridge-297280) DBG | Writing SSH key tar header
	I1028 13:21:00.328734  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:00.328492  146132 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280 ...
	I1028 13:21:00.329510  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280
	I1028 13:21:00.329553  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 13:21:00.329568  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280 (perms=drwx------)
	I1028 13:21:00.329649  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 13:21:00.329665  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:21:00.329680  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 13:21:00.329690  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 13:21:00.329722  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins
	I1028 13:21:00.329738  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 13:21:00.329754  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 13:21:00.329787  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 13:21:00.329799  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home
	I1028 13:21:00.329814  146109 main.go:141] libmachine: (bridge-297280) DBG | Skipping /home - not owner
	I1028 13:21:00.329833  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 13:21:00.329859  146109 main.go:141] libmachine: (bridge-297280) Creating domain...
	I1028 13:21:00.330951  146109 main.go:141] libmachine: (bridge-297280) define libvirt domain using xml: 
	I1028 13:21:00.330976  146109 main.go:141] libmachine: (bridge-297280) <domain type='kvm'>
	I1028 13:21:00.330996  146109 main.go:141] libmachine: (bridge-297280)   <name>bridge-297280</name>
	I1028 13:21:00.331011  146109 main.go:141] libmachine: (bridge-297280)   <memory unit='MiB'>3072</memory>
	I1028 13:21:00.331023  146109 main.go:141] libmachine: (bridge-297280)   <vcpu>2</vcpu>
	I1028 13:21:00.331036  146109 main.go:141] libmachine: (bridge-297280)   <features>
	I1028 13:21:00.331048  146109 main.go:141] libmachine: (bridge-297280)     <acpi/>
	I1028 13:21:00.331054  146109 main.go:141] libmachine: (bridge-297280)     <apic/>
	I1028 13:21:00.331062  146109 main.go:141] libmachine: (bridge-297280)     <pae/>
	I1028 13:21:00.331068  146109 main.go:141] libmachine: (bridge-297280)     
	I1028 13:21:00.331073  146109 main.go:141] libmachine: (bridge-297280)   </features>
	I1028 13:21:00.331077  146109 main.go:141] libmachine: (bridge-297280)   <cpu mode='host-passthrough'>
	I1028 13:21:00.331081  146109 main.go:141] libmachine: (bridge-297280)   
	I1028 13:21:00.331085  146109 main.go:141] libmachine: (bridge-297280)   </cpu>
	I1028 13:21:00.331089  146109 main.go:141] libmachine: (bridge-297280)   <os>
	I1028 13:21:00.331093  146109 main.go:141] libmachine: (bridge-297280)     <type>hvm</type>
	I1028 13:21:00.331098  146109 main.go:141] libmachine: (bridge-297280)     <boot dev='cdrom'/>
	I1028 13:21:00.331102  146109 main.go:141] libmachine: (bridge-297280)     <boot dev='hd'/>
	I1028 13:21:00.331110  146109 main.go:141] libmachine: (bridge-297280)     <bootmenu enable='no'/>
	I1028 13:21:00.331115  146109 main.go:141] libmachine: (bridge-297280)   </os>
	I1028 13:21:00.331123  146109 main.go:141] libmachine: (bridge-297280)   <devices>
	I1028 13:21:00.331133  146109 main.go:141] libmachine: (bridge-297280)     <disk type='file' device='cdrom'>
	I1028 13:21:00.331145  146109 main.go:141] libmachine: (bridge-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/boot2docker.iso'/>
	I1028 13:21:00.331169  146109 main.go:141] libmachine: (bridge-297280)       <target dev='hdc' bus='scsi'/>
	I1028 13:21:00.331180  146109 main.go:141] libmachine: (bridge-297280)       <readonly/>
	I1028 13:21:00.331186  146109 main.go:141] libmachine: (bridge-297280)     </disk>
	I1028 13:21:00.331232  146109 main.go:141] libmachine: (bridge-297280)     <disk type='file' device='disk'>
	I1028 13:21:00.331272  146109 main.go:141] libmachine: (bridge-297280)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 13:21:00.331311  146109 main.go:141] libmachine: (bridge-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/bridge-297280.rawdisk'/>
	I1028 13:21:00.331336  146109 main.go:141] libmachine: (bridge-297280)       <target dev='hda' bus='virtio'/>
	I1028 13:21:00.331349  146109 main.go:141] libmachine: (bridge-297280)     </disk>
	I1028 13:21:00.331360  146109 main.go:141] libmachine: (bridge-297280)     <interface type='network'>
	I1028 13:21:00.331369  146109 main.go:141] libmachine: (bridge-297280)       <source network='mk-bridge-297280'/>
	I1028 13:21:00.331379  146109 main.go:141] libmachine: (bridge-297280)       <model type='virtio'/>
	I1028 13:21:00.331390  146109 main.go:141] libmachine: (bridge-297280)     </interface>
	I1028 13:21:00.331400  146109 main.go:141] libmachine: (bridge-297280)     <interface type='network'>
	I1028 13:21:00.331411  146109 main.go:141] libmachine: (bridge-297280)       <source network='default'/>
	I1028 13:21:00.331421  146109 main.go:141] libmachine: (bridge-297280)       <model type='virtio'/>
	I1028 13:21:00.331430  146109 main.go:141] libmachine: (bridge-297280)     </interface>
	I1028 13:21:00.331445  146109 main.go:141] libmachine: (bridge-297280)     <serial type='pty'>
	I1028 13:21:00.331455  146109 main.go:141] libmachine: (bridge-297280)       <target port='0'/>
	I1028 13:21:00.331462  146109 main.go:141] libmachine: (bridge-297280)     </serial>
	I1028 13:21:00.331472  146109 main.go:141] libmachine: (bridge-297280)     <console type='pty'>
	I1028 13:21:00.331480  146109 main.go:141] libmachine: (bridge-297280)       <target type='serial' port='0'/>
	I1028 13:21:00.331498  146109 main.go:141] libmachine: (bridge-297280)     </console>
	I1028 13:21:00.331513  146109 main.go:141] libmachine: (bridge-297280)     <rng model='virtio'>
	I1028 13:21:00.331526  146109 main.go:141] libmachine: (bridge-297280)       <backend model='random'>/dev/random</backend>
	I1028 13:21:00.331547  146109 main.go:141] libmachine: (bridge-297280)     </rng>
	I1028 13:21:00.331558  146109 main.go:141] libmachine: (bridge-297280)     
	I1028 13:21:00.331568  146109 main.go:141] libmachine: (bridge-297280)     
	I1028 13:21:00.331587  146109 main.go:141] libmachine: (bridge-297280)   </devices>
	I1028 13:21:00.331604  146109 main.go:141] libmachine: (bridge-297280) </domain>
	I1028 13:21:00.331647  146109 main.go:141] libmachine: (bridge-297280) 
	I1028 13:21:00.336655  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:9d:94:98 in network default
	I1028 13:21:00.337380  146109 main.go:141] libmachine: (bridge-297280) Ensuring networks are active...
	I1028 13:21:00.337397  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:00.338277  146109 main.go:141] libmachine: (bridge-297280) Ensuring network default is active
	I1028 13:21:00.338623  146109 main.go:141] libmachine: (bridge-297280) Ensuring network mk-bridge-297280 is active
	I1028 13:21:00.339170  146109 main.go:141] libmachine: (bridge-297280) Getting domain xml...
	I1028 13:21:00.339967  146109 main.go:141] libmachine: (bridge-297280) Creating domain...
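The domain XML above is defined and started through libmachine's libvirt plugin; a hand-driven sketch of the same steps, assuming the XML is saved as bridge-297280.xml:

    virsh --connect qemu:///system define bridge-297280.xml
    virsh --connect qemu:///system start bridge-297280
    virsh --connect qemu:///system domifaddr bridge-297280   # prints the guest IP once DHCP assigns one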
	I1028 13:21:01.636379  146109 main.go:141] libmachine: (bridge-297280) Waiting to get IP...
	I1028 13:21:01.637495  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:01.638088  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:01.638116  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:01.638063  146132 retry.go:31] will retry after 289.404152ms: waiting for machine to come up
	I1028 13:21:01.929711  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:01.930295  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:01.930322  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:01.930260  146132 retry.go:31] will retry after 278.924935ms: waiting for machine to come up
	I1028 13:21:02.210852  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:02.211341  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:02.211371  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:02.211287  146132 retry.go:31] will retry after 333.293065ms: waiting for machine to come up
	I1028 13:21:02.545917  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:02.546514  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:02.546542  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:02.546454  146132 retry.go:31] will retry after 500.258922ms: waiting for machine to come up
	I1028 13:21:03.047994  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:03.048535  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:03.048568  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:03.048476  146132 retry.go:31] will retry after 538.451624ms: waiting for machine to come up
	I1028 13:21:03.588801  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:03.589368  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:03.589400  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:03.589323  146132 retry.go:31] will retry after 596.904677ms: waiting for machine to come up
	I1028 13:21:04.188066  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:04.188678  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:04.188713  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:04.188606  146132 retry.go:31] will retry after 1.087456635s: waiting for machine to come up
	I1028 13:21:06.135317  144327 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 13:21:06.135411  144327 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 13:21:06.135531  144327 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 13:21:06.135699  144327 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 13:21:06.135878  144327 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 13:21:06.135990  144327 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 13:21:06.137628  144327 out.go:235]   - Generating certificates and keys ...
	I1028 13:21:06.137735  144327 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 13:21:06.137850  144327 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 13:21:06.137963  144327 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 13:21:06.138080  144327 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 13:21:06.138153  144327 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 13:21:06.138210  144327 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 13:21:06.138286  144327 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 13:21:06.138484  144327 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-297280 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	I1028 13:21:06.138572  144327 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 13:21:06.138705  144327 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-297280 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	I1028 13:21:06.138791  144327 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 13:21:06.138864  144327 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 13:21:06.138924  144327 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 13:21:06.138996  144327 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 13:21:06.139070  144327 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 13:21:06.139145  144327 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 13:21:06.139233  144327 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 13:21:06.139327  144327 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 13:21:06.139401  144327 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 13:21:06.139511  144327 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 13:21:06.139606  144327 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 13:21:06.140986  144327 out.go:235]   - Booting up control plane ...
	I1028 13:21:06.141117  144327 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 13:21:06.141236  144327 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 13:21:06.141347  144327 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 13:21:06.141513  144327 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 13:21:06.141639  144327 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 13:21:06.141709  144327 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 13:21:06.141906  144327 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 13:21:06.142043  144327 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 13:21:06.142121  144327 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.346899ms
	I1028 13:21:06.142201  144327 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 13:21:06.142264  144327 kubeadm.go:310] [api-check] The API server is healthy after 5.501976932s
	I1028 13:21:06.142357  144327 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 13:21:06.142463  144327 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 13:21:06.142513  144327 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 13:21:06.142670  144327 kubeadm.go:310] [mark-control-plane] Marking the node flannel-297280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 13:21:06.142722  144327 kubeadm.go:310] [bootstrap-token] Using token: 78vwrn.m8eixtl0knqeesha
	I1028 13:21:06.144121  144327 out.go:235]   - Configuring RBAC rules ...
	I1028 13:21:06.144214  144327 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 13:21:06.144286  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 13:21:06.144482  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 13:21:06.144698  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 13:21:06.144907  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 13:21:06.145039  144327 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 13:21:06.145209  144327 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 13:21:06.145282  144327 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 13:21:06.145349  144327 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 13:21:06.145373  144327 kubeadm.go:310] 
	I1028 13:21:06.145462  144327 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 13:21:06.145474  144327 kubeadm.go:310] 
	I1028 13:21:06.145583  144327 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 13:21:06.145592  144327 kubeadm.go:310] 
	I1028 13:21:06.145632  144327 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 13:21:06.145731  144327 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 13:21:06.145807  144327 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 13:21:06.145819  144327 kubeadm.go:310] 
	I1028 13:21:06.145890  144327 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 13:21:06.145903  144327 kubeadm.go:310] 
	I1028 13:21:06.145978  144327 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 13:21:06.145987  144327 kubeadm.go:310] 
	I1028 13:21:06.146068  144327 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 13:21:06.146167  144327 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 13:21:06.146264  144327 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 13:21:06.146273  144327 kubeadm.go:310] 
	I1028 13:21:06.146356  144327 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 13:21:06.146427  144327 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 13:21:06.146446  144327 kubeadm.go:310] 
	I1028 13:21:06.146550  144327 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 78vwrn.m8eixtl0knqeesha \
	I1028 13:21:06.146676  144327 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 13:21:06.146697  144327 kubeadm.go:310] 	--control-plane 
	I1028 13:21:06.146703  144327 kubeadm.go:310] 
	I1028 13:21:06.146771  144327 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 13:21:06.146777  144327 kubeadm.go:310] 
	I1028 13:21:06.146847  144327 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 78vwrn.m8eixtl0knqeesha \
	I1028 13:21:06.146990  144327 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
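The join commands printed above embed a bootstrap token with a limited lifetime; if it expires before another node joins, a fresh command can be generated on the control plane (a sketch, assuming shell access on flannel-297280):

    sudo kubeadm token list
    sudo kubeadm token create --print-join-command   # prints a new 'kubeadm join ...' line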
	I1028 13:21:06.147004  144327 cni.go:84] Creating CNI manager for "flannel"
	I1028 13:21:06.148709  144327 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1028 13:21:06.150058  144327 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 13:21:06.155609  144327 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 13:21:06.155625  144327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1028 13:21:06.173094  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 13:21:06.540077  144327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 13:21:06.540175  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:06.540175  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-297280 minikube.k8s.io/updated_at=2024_10_28T13_21_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=flannel-297280 minikube.k8s.io/primary=true
	I1028 13:21:06.573924  144327 ops.go:34] apiserver oom_adj: -16
	I1028 13:21:06.688052  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:07.188886  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:07.688772  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:08.188516  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:05.658897  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:08.158613  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:05.277513  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:05.278040  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:05.278069  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:05.277985  146132 retry.go:31] will retry after 905.19327ms: waiting for machine to come up
	I1028 13:21:06.184909  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:06.185361  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:06.185389  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:06.185308  146132 retry.go:31] will retry after 1.852852207s: waiting for machine to come up
	I1028 13:21:08.040431  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:08.041024  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:08.041052  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:08.040971  146132 retry.go:31] will retry after 1.93654077s: waiting for machine to come up
	I1028 13:21:08.688497  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:09.188956  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:09.688331  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:09.800998  144327 kubeadm.go:1113] duration metric: took 3.260888047s to wait for elevateKubeSystemPrivileges
	I1028 13:21:09.801037  144327 kubeadm.go:394] duration metric: took 14.575440018s to StartCluster
	I1028 13:21:09.801066  144327 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:09.801177  144327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:21:09.802895  144327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:09.803165  144327 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:21:09.803283  144327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 13:21:09.803534  144327 config.go:182] Loaded profile config "flannel-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:21:09.803585  144327 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 13:21:09.803689  144327 addons.go:69] Setting storage-provisioner=true in profile "flannel-297280"
	I1028 13:21:09.803697  144327 addons.go:69] Setting default-storageclass=true in profile "flannel-297280"
	I1028 13:21:09.803708  144327 addons.go:234] Setting addon storage-provisioner=true in "flannel-297280"
	I1028 13:21:09.803728  144327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-297280"
	I1028 13:21:09.803739  144327 host.go:66] Checking if "flannel-297280" exists ...
	I1028 13:21:09.804164  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.804203  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.804218  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.804244  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.805686  144327 out.go:177] * Verifying Kubernetes components...
	I1028 13:21:09.810212  144327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:09.822883  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I1028 13:21:09.823367  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.823672  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I1028 13:21:09.824030  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.824089  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.824131  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.824475  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.824789  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.824813  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.825096  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.825142  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.825266  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.825432  144327 main.go:141] libmachine: (flannel-297280) Calling .GetState
	I1028 13:21:09.829481  144327 addons.go:234] Setting addon default-storageclass=true in "flannel-297280"
	I1028 13:21:09.829545  144327 host.go:66] Checking if "flannel-297280" exists ...
	I1028 13:21:09.829946  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.829968  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.843725  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I1028 13:21:09.844248  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.844730  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.844746  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.845086  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.845254  144327 main.go:141] libmachine: (flannel-297280) Calling .GetState
	I1028 13:21:09.847092  144327 main.go:141] libmachine: (flannel-297280) Calling .DriverName
	I1028 13:21:09.848848  144327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 13:21:09.849992  144327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:09.850014  144327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 13:21:09.850031  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHHostname
	I1028 13:21:09.853223  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1028 13:21:09.853374  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.853801  144327 main.go:141] libmachine: (flannel-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:99:5f", ip: ""} in network mk-flannel-297280: {Iface:virbr2 ExpiryTime:2024-10-28 14:20:40 +0000 UTC Type:0 Mac:52:54:00:81:99:5f Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:flannel-297280 Clientid:01:52:54:00:81:99:5f}
	I1028 13:21:09.853824  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined IP address 192.168.50.159 and MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.854027  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.854075  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHPort
	I1028 13:21:09.854529  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.854545  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.854548  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHKeyPath
	I1028 13:21:09.854720  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHUsername
	I1028 13:21:09.854858  144327 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/flannel-297280/id_rsa Username:docker}
	I1028 13:21:09.855038  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.855692  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.855725  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.871250  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I1028 13:21:09.871800  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.872425  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.872444  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.872804  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.873039  144327 main.go:141] libmachine: (flannel-297280) Calling .GetState
	I1028 13:21:09.874977  144327 main.go:141] libmachine: (flannel-297280) Calling .DriverName
	I1028 13:21:09.875199  144327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:09.875219  144327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 13:21:09.875235  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHHostname
	I1028 13:21:09.878037  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.878462  144327 main.go:141] libmachine: (flannel-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:99:5f", ip: ""} in network mk-flannel-297280: {Iface:virbr2 ExpiryTime:2024-10-28 14:20:40 +0000 UTC Type:0 Mac:52:54:00:81:99:5f Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:flannel-297280 Clientid:01:52:54:00:81:99:5f}
	I1028 13:21:09.878490  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined IP address 192.168.50.159 and MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.878631  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHPort
	I1028 13:21:09.878786  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHKeyPath
	I1028 13:21:09.878885  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHUsername
	I1028 13:21:09.878970  144327 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/flannel-297280/id_rsa Username:docker}
	I1028 13:21:10.029174  144327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:21:10.029271  144327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 13:21:10.055539  144327 node_ready.go:35] waiting up to 15m0s for node "flannel-297280" to be "Ready" ...
	I1028 13:21:10.216361  144327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:10.250575  144327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:10.590461  144327 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
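The sed pipeline above rewrites the coredns ConfigMap so the Corefile gains a log directive after errors and a hosts block ahead of the forward plugin; the inserted fragments look roughly like this (reconstructed from the command, not copied from the cluster):

        log
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf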
	I1028 13:21:11.009084  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009117  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009134  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009147  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009425  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009450  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009460  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009466  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009563  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.009593  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009612  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009631  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009642  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009799  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.009833  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009849  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009879  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009898  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009913  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.021965  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.021986  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.022261  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.022280  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.022280  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.024652  144327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 13:21:11.025756  144327 addons.go:510] duration metric: took 1.222167964s for enable addons: enabled=[storage-provisioner default-storageclass]
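Whether the two addons actually came up can be checked from the host against the new context; a sketch, with the pod and class names assumed to match minikube's defaults:

    kubectl --context flannel-297280 -n kube-system get pod storage-provisioner
    kubectl --context flannel-297280 get storageclass   # 'standard' should be marked (default)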
	I1028 13:21:11.096842  144327 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-297280" context rescaled to 1 replicas
	I1028 13:21:12.059123  144327 node_ready.go:53] node "flannel-297280" has status "Ready":"False"
	I1028 13:21:10.160403  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:12.657934  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:13.658140  142406 pod_ready.go:93] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.658165  142406 pod_ready.go:82] duration metric: took 32.506180506s for pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.658179  142406 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.660114  142406 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s8gk8" not found
	I1028 13:21:13.660142  142406 pod_ready.go:82] duration metric: took 1.955168ms for pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace to be "Ready" ...
	E1028 13:21:13.660154  142406 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s8gk8" not found
	I1028 13:21:13.660163  142406 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.665007  142406 pod_ready.go:93] pod "etcd-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.665032  142406 pod_ready.go:82] duration metric: took 4.858691ms for pod "etcd-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.665043  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.670487  142406 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.670508  142406 pod_ready.go:82] duration metric: took 5.45898ms for pod "kube-apiserver-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.670517  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.675354  142406 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.675377  142406 pod_ready.go:82] duration metric: took 4.853628ms for pod "kube-controller-manager-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.675389  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7dg4r" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.855393  142406 pod_ready.go:93] pod "kube-proxy-7dg4r" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.855417  142406 pod_ready.go:82] duration metric: took 180.02029ms for pod "kube-proxy-7dg4r" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.855428  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:09.978929  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:09.979569  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:09.979603  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:09.979528  146132 retry.go:31] will retry after 2.517726332s: waiting for machine to come up
	I1028 13:21:12.499175  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:12.499651  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:12.499681  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:12.499584  146132 retry.go:31] will retry after 3.287997939s: waiting for machine to come up
	I1028 13:21:14.255590  142406 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:14.255622  142406 pod_ready.go:82] duration metric: took 400.186438ms for pod "kube-scheduler-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:14.255650  142406 pod_ready.go:39] duration metric: took 33.116717205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:14.255671  142406 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:21:14.255732  142406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:21:14.271555  142406 api_server.go:72] duration metric: took 34.034379367s to wait for apiserver process to appear ...
	I1028 13:21:14.271577  142406 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:21:14.271596  142406 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I1028 13:21:14.275775  142406 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I1028 13:21:14.276809  142406 api_server.go:141] control plane version: v1.31.2
	I1028 13:21:14.276829  142406 api_server.go:131] duration metric: took 5.245547ms to wait for apiserver health ...
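The same healthz probe can be reproduced without the test harness; a sketch, assuming kubectl is pointed at the enable-default-cni-297280 context:

    kubectl --context enable-default-cni-297280 get --raw /healthz   # prints 'ok' when the apiserver is healthy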
	I1028 13:21:14.276838  142406 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:21:14.458435  142406 system_pods.go:59] 7 kube-system pods found
	I1028 13:21:14.458464  142406 system_pods.go:61] "coredns-7c65d6cfc9-jdq8d" [c8370b8b-04a0-4e84-b64b-08c166f3fc3b] Running
	I1028 13:21:14.458469  142406 system_pods.go:61] "etcd-enable-default-cni-297280" [0aee5b6e-8399-4fc4-ac09-e43f0ae2f755] Running
	I1028 13:21:14.458473  142406 system_pods.go:61] "kube-apiserver-enable-default-cni-297280" [732d43de-3ced-43d0-baa1-9bfcb2ebc808] Running
	I1028 13:21:14.458476  142406 system_pods.go:61] "kube-controller-manager-enable-default-cni-297280" [9e81877e-0de0-448b-9a73-ed546c6c7640] Running
	I1028 13:21:14.458479  142406 system_pods.go:61] "kube-proxy-7dg4r" [6743c3c5-5403-4ec7-b862-6dfb58bd7c39] Running
	I1028 13:21:14.458483  142406 system_pods.go:61] "kube-scheduler-enable-default-cni-297280" [5629459b-6e6a-45fa-8e01-db534d84bf0a] Running
	I1028 13:21:14.458486  142406 system_pods.go:61] "storage-provisioner" [939c3647-0f0f-4fc4-ab85-2abb6c2c2256] Running
	I1028 13:21:14.458497  142406 system_pods.go:74] duration metric: took 181.647136ms to wait for pod list to return data ...
	I1028 13:21:14.458507  142406 default_sa.go:34] waiting for default service account to be created ...
	I1028 13:21:14.655550  142406 default_sa.go:45] found service account: "default"
	I1028 13:21:14.655578  142406 default_sa.go:55] duration metric: took 197.064132ms for default service account to be created ...
	I1028 13:21:14.655592  142406 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 13:21:14.857890  142406 system_pods.go:86] 7 kube-system pods found
	I1028 13:21:14.857918  142406 system_pods.go:89] "coredns-7c65d6cfc9-jdq8d" [c8370b8b-04a0-4e84-b64b-08c166f3fc3b] Running
	I1028 13:21:14.857923  142406 system_pods.go:89] "etcd-enable-default-cni-297280" [0aee5b6e-8399-4fc4-ac09-e43f0ae2f755] Running
	I1028 13:21:14.857927  142406 system_pods.go:89] "kube-apiserver-enable-default-cni-297280" [732d43de-3ced-43d0-baa1-9bfcb2ebc808] Running
	I1028 13:21:14.857931  142406 system_pods.go:89] "kube-controller-manager-enable-default-cni-297280" [9e81877e-0de0-448b-9a73-ed546c6c7640] Running
	I1028 13:21:14.857934  142406 system_pods.go:89] "kube-proxy-7dg4r" [6743c3c5-5403-4ec7-b862-6dfb58bd7c39] Running
	I1028 13:21:14.857938  142406 system_pods.go:89] "kube-scheduler-enable-default-cni-297280" [5629459b-6e6a-45fa-8e01-db534d84bf0a] Running
	I1028 13:21:14.857941  142406 system_pods.go:89] "storage-provisioner" [939c3647-0f0f-4fc4-ab85-2abb6c2c2256] Running
	I1028 13:21:14.857948  142406 system_pods.go:126] duration metric: took 202.349362ms to wait for k8s-apps to be running ...
	I1028 13:21:14.857961  142406 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 13:21:14.858012  142406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:21:14.872839  142406 system_svc.go:56] duration metric: took 14.873794ms WaitForService to wait for kubelet
	I1028 13:21:14.872868  142406 kubeadm.go:582] duration metric: took 34.635694617s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:21:14.872894  142406 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:21:15.056821  142406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:21:15.056850  142406 node_conditions.go:123] node cpu capacity is 2
	I1028 13:21:15.056864  142406 node_conditions.go:105] duration metric: took 183.963126ms to run NodePressure ...
	I1028 13:21:15.056879  142406 start.go:241] waiting for startup goroutines ...
	I1028 13:21:15.056888  142406 start.go:246] waiting for cluster config update ...
	I1028 13:21:15.056902  142406 start.go:255] writing updated cluster config ...
	I1028 13:21:15.057178  142406 ssh_runner.go:195] Run: rm -f paused
	I1028 13:21:15.106793  142406 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 13:21:15.108998  142406 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-297280" cluster and "default" namespace by default
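At this point the kubeconfig has been updated for the new profile, so the cluster can be inspected directly; a sketch of typical follow-up checks, not part of the test flow itself:

    kubectl config current-context                            # enable-default-cni-297280
    kubectl --context enable-default-cni-297280 get pods -A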
	I1028 13:21:14.559385  144327 node_ready.go:53] node "flannel-297280" has status "Ready":"False"
	I1028 13:21:17.061958  144327 node_ready.go:53] node "flannel-297280" has status "Ready":"False"
	I1028 13:21:15.788817  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:15.789337  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:15.789369  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:15.789264  146132 retry.go:31] will retry after 3.901879397s: waiting for machine to come up
	I1028 13:21:19.693541  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:19.694044  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:19.694068  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:19.693987  146132 retry.go:31] will retry after 4.556264872s: waiting for machine to come up
	I1028 13:21:18.558736  144327 node_ready.go:49] node "flannel-297280" has status "Ready":"True"
	I1028 13:21:18.558768  144327 node_ready.go:38] duration metric: took 8.503177167s for node "flannel-297280" to be "Ready" ...
	I1028 13:21:18.558782  144327 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:18.567049  144327 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:20.574718  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:23.073114  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:24.253019  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.253479  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has current primary IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.253497  146109 main.go:141] libmachine: (bridge-297280) Found IP for machine: 192.168.39.112
	I1028 13:21:24.253513  146109 main.go:141] libmachine: (bridge-297280) Reserving static IP address...
	I1028 13:21:24.253928  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find host DHCP lease matching {name: "bridge-297280", mac: "52:54:00:d9:5d:00", ip: "192.168.39.112"} in network mk-bridge-297280
	I1028 13:21:24.329131  146109 main.go:141] libmachine: (bridge-297280) DBG | Getting to WaitForSSH function...
	I1028 13:21:24.329161  146109 main.go:141] libmachine: (bridge-297280) Reserved static IP address: 192.168.39.112
	I1028 13:21:24.329173  146109 main.go:141] libmachine: (bridge-297280) Waiting for SSH to be available...
	I1028 13:21:24.332308  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.332773  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.332802  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.332929  146109 main.go:141] libmachine: (bridge-297280) DBG | Using SSH client type: external
	I1028 13:21:24.332957  146109 main.go:141] libmachine: (bridge-297280) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa (-rw-------)
	I1028 13:21:24.333010  146109 main.go:141] libmachine: (bridge-297280) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 13:21:24.333034  146109 main.go:141] libmachine: (bridge-297280) DBG | About to run SSH command:
	I1028 13:21:24.333051  146109 main.go:141] libmachine: (bridge-297280) DBG | exit 0
	I1028 13:21:24.455203  146109 main.go:141] libmachine: (bridge-297280) DBG | SSH cmd err, output: <nil>: 
	I1028 13:21:24.455478  146109 main.go:141] libmachine: (bridge-297280) KVM machine creation complete!
	I1028 13:21:24.455756  146109 main.go:141] libmachine: (bridge-297280) Calling .GetConfigRaw
	I1028 13:21:24.456324  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:24.456487  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:24.456675  146109 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 13:21:24.456692  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:24.458016  146109 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 13:21:24.458028  146109 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 13:21:24.458033  146109 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 13:21:24.458038  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.460510  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.460899  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.460922  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.461102  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.461248  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.461427  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.461561  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.461715  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.461917  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.461928  146109 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 13:21:24.562870  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
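WaitForSSH above probes the guest by running a trivial `exit 0` over SSH until it succeeds; the earlier retry.go lines show the backoff used while the VM is still booting. A minimal sketch of that retry pattern, assuming the plain ssh CLI and a fixed attempt budget (this is not libmachine's actual implementation; the full option set minikube passes is visible in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH retries "ssh <target> exit 0" until it succeeds or attempts run out.
    func waitForSSH(target string, attempts int, delay time.Duration) error {
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "ConnectTimeout=10",
    			target, "exit", "0")
    		if err := cmd.Run(); err == nil {
    			return nil // guest is reachable
    		}
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("ssh to %s never became available", target)
    }

    func main() {
    	// Target taken from the log above; attempts and delay are illustrative.
    	if err := waitForSSH("docker@192.168.39.112", 10, 4*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }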
	I1028 13:21:24.562896  146109 main.go:141] libmachine: Detecting the provisioner...
	I1028 13:21:24.562903  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.565856  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.566275  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.566302  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.566485  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.566704  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.566898  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.567051  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.567222  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.567448  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.567463  146109 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 13:21:24.667804  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 13:21:24.667888  146109 main.go:141] libmachine: found compatible host: buildroot
	I1028 13:21:24.667898  146109 main.go:141] libmachine: Provisioning with buildroot...
	I1028 13:21:24.667905  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:21:24.668136  146109 buildroot.go:166] provisioning hostname "bridge-297280"
	I1028 13:21:24.668178  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:21:24.668373  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.671143  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.671526  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.671566  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.671676  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.671850  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.672013  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.672134  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.672297  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.672544  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.672562  146109 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-297280 && echo "bridge-297280" | sudo tee /etc/hostname
	I1028 13:21:24.785381  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-297280
	
	I1028 13:21:24.785409  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.788208  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.788581  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.788620  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.788718  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.788896  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.789033  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.789163  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.789349  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.789565  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.789583  146109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-297280' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-297280/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-297280' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 13:21:24.895789  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 13:21:24.895821  146109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 13:21:24.895915  146109 buildroot.go:174] setting up certificates
	I1028 13:21:24.895928  146109 provision.go:84] configureAuth start
	I1028 13:21:24.895942  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:21:24.896238  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:24.898957  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.899338  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.899366  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.899492  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.901788  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.902139  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.902164  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.902290  146109 provision.go:143] copyHostCerts
	I1028 13:21:24.902380  146109 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 13:21:24.902398  146109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 13:21:24.902478  146109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 13:21:24.902600  146109 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 13:21:24.902611  146109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 13:21:24.902655  146109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 13:21:24.902744  146109 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 13:21:24.902753  146109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 13:21:24.902788  146109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 13:21:24.902884  146109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.bridge-297280 san=[127.0.0.1 192.168.39.112 bridge-297280 localhost minikube]
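The server certificate above is generated with the SANs listed in the log (127.0.0.1, 192.168.39.112, bridge-297280, localhost, minikube). As a hedged, self-contained illustration of building that kind of SAN-bearing certificate with Go's crypto/x509 (self-signed here for brevity, whereas minikube signs with its CA; the output file name is hypothetical):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-297280"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as reported in the provisioning log above.
    		DNSNames:    []string{"bridge-297280", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.112")},
    	}
    	// Self-signed for the sketch; the real flow signs with the minikube CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	out, _ := os.Create("server.pem")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }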
	I1028 13:21:25.140172  146109 provision.go:177] copyRemoteCerts
	I1028 13:21:25.140236  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 13:21:25.140261  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.142733  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.143073  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.143097  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.143240  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.143457  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.143642  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.143765  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.221455  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 13:21:25.244215  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 13:21:25.268467  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 13:21:25.289137  146109 provision.go:87] duration metric: took 393.193977ms to configureAuth
	I1028 13:21:25.289160  146109 buildroot.go:189] setting minikube options for container-runtime
	I1028 13:21:25.289305  146109 config.go:182] Loaded profile config "bridge-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:21:25.289395  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.292192  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.292696  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.292726  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.292860  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.293050  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.293196  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.293335  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.293479  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:25.293711  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:25.293732  146109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 13:21:25.500053  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 13:21:25.500095  146109 main.go:141] libmachine: Checking connection to Docker...
	I1028 13:21:25.500106  146109 main.go:141] libmachine: (bridge-297280) Calling .GetURL
	I1028 13:21:25.501161  146109 main.go:141] libmachine: (bridge-297280) DBG | Using libvirt version 6000000
	I1028 13:21:25.503297  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.503698  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.503739  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.503853  146109 main.go:141] libmachine: Docker is up and running!
	I1028 13:21:25.503871  146109 main.go:141] libmachine: Reticulating splines...
	I1028 13:21:25.503881  146109 client.go:171] duration metric: took 25.675293626s to LocalClient.Create
	I1028 13:21:25.503909  146109 start.go:167] duration metric: took 25.675366229s to libmachine.API.Create "bridge-297280"
	I1028 13:21:25.503922  146109 start.go:293] postStartSetup for "bridge-297280" (driver="kvm2")
	I1028 13:21:25.503935  146109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 13:21:25.503956  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.504185  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 13:21:25.504229  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.506718  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.507089  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.507115  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.507257  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.507422  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.507564  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.507729  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.585186  146109 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 13:21:25.589181  146109 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 13:21:25.589204  146109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 13:21:25.589261  146109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 13:21:25.589343  146109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 13:21:25.589440  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 13:21:25.599094  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:21:25.622862  146109 start.go:296] duration metric: took 118.923974ms for postStartSetup
	I1028 13:21:25.622922  146109 main.go:141] libmachine: (bridge-297280) Calling .GetConfigRaw
	I1028 13:21:25.623541  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:25.625958  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.626346  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.626380  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.626569  146109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/config.json ...
	I1028 13:21:25.626775  146109 start.go:128] duration metric: took 25.819861563s to createHost
	I1028 13:21:25.626803  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.629111  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.629433  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.629463  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.629601  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.629768  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.629912  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.630087  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.630247  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:25.630428  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:25.630444  146109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 13:21:25.731923  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730121685.709299173
	
	I1028 13:21:25.731947  146109 fix.go:216] guest clock: 1730121685.709299173
	I1028 13:21:25.731957  146109 fix.go:229] Guest: 2024-10-28 13:21:25.709299173 +0000 UTC Remote: 2024-10-28 13:21:25.626789068 +0000 UTC m=+25.939003285 (delta=82.510105ms)
	I1028 13:21:25.732013  146109 fix.go:200] guest clock delta is within tolerance: 82.510105ms
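The clock check above runs `date +%s.%N` on the guest and compares the result with the host-side timestamp, accepting the ~82.5ms delta as within tolerance. A small sketch of that comparison (the one-second tolerance here is illustrative, not minikube's constant; timestamps are taken from the fix.go lines above):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether two clocks differ by no more than tol.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol
    }

    func main() {
    	guest := time.Unix(1730121685, 709299173) // from `date +%s.%N` on the guest
    	host := time.Date(2024, 10, 28, 13, 21, 25, 626789068, time.UTC)
    	delta, ok := withinTolerance(guest, host, time.Second)
    	fmt.Printf("delta=%v, within tolerance: %v\n", delta, ok)
    }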
	I1028 13:21:25.732025  146109 start.go:83] releasing machines lock for "bridge-297280", held for 25.925238039s
	I1028 13:21:25.732056  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.732342  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:25.734684  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.734994  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.735020  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.735193  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.735677  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.735839  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.735930  146109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 13:21:25.735991  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.736098  146109 ssh_runner.go:195] Run: cat /version.json
	I1028 13:21:25.736123  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.738890  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739070  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739322  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.739356  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739451  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.739483  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.739513  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739605  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.739701  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.739778  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.739841  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.739906  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.739955  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.740103  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.811976  146109 ssh_runner.go:195] Run: systemctl --version
	I1028 13:21:25.836052  146109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 13:21:25.988527  146109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 13:21:25.994625  146109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 13:21:25.994697  146109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 13:21:26.009575  146109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 13:21:26.009608  146109 start.go:495] detecting cgroup driver to use...
	I1028 13:21:26.009692  146109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 13:21:26.027471  146109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 13:21:26.039855  146109 docker.go:217] disabling cri-docker service (if available) ...
	I1028 13:21:26.039903  146109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 13:21:26.052266  146109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 13:21:26.064513  146109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 13:21:26.179689  146109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 13:21:26.329610  146109 docker.go:233] disabling docker service ...
	I1028 13:21:26.329697  146109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 13:21:26.343046  146109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 13:21:26.354840  146109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 13:21:26.500546  146109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 13:21:26.629347  146109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 13:21:26.646273  146109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 13:21:26.664485  146109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 13:21:26.664551  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.675273  146109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 13:21:26.675335  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.685750  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.695352  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.704981  146109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 13:21:26.715492  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.725307  146109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.744718  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.757333  146109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 13:21:26.767238  146109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 13:21:26.767303  146109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 13:21:26.781126  146109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
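The sysctl probe above exits with status 255 because the br_netfilter module is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding before reloading systemd and restarting CRI-O. A hedged sketch of that check-then-load sequence using the same commands the log shows (error handling simplified; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	// Probe the bridge netfilter sysctl; failure usually means br_netfilter isn't loaded.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("probe failed, loading module:", err)
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			panic(err)
    		}
    	}
    	// Enable IPv4 forwarding, as in the log.
    	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		panic(err)
    	}
    }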
	I1028 13:21:26.790731  146109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:26.930586  146109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 13:21:27.022198  146109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 13:21:27.022271  146109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 13:21:27.027112  146109 start.go:563] Will wait 60s for crictl version
	I1028 13:21:27.027178  146109 ssh_runner.go:195] Run: which crictl
	I1028 13:21:27.031088  146109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 13:21:27.075908  146109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 13:21:27.075987  146109 ssh_runner.go:195] Run: crio --version
	I1028 13:21:27.106900  146109 ssh_runner.go:195] Run: crio --version
	I1028 13:21:27.143942  146109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 13:21:25.073701  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:27.077344  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:27.145246  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:27.148446  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:27.148796  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:27.148830  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:27.149063  146109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 13:21:27.153052  146109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:21:27.167591  146109 kubeadm.go:883] updating cluster {Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 13:21:27.167737  146109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:21:27.167785  146109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:21:27.201486  146109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 13:21:27.201560  146109 ssh_runner.go:195] Run: which lz4
	I1028 13:21:27.205413  146109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 13:21:27.209442  146109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 13:21:27.209475  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 13:21:28.417320  146109 crio.go:462] duration metric: took 1.211930429s to copy over tarball
	I1028 13:21:28.417419  146109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 13:21:30.541806  146109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.12433452s)
	I1028 13:21:30.541849  146109 crio.go:469] duration metric: took 2.124498629s to extract the tarball
	I1028 13:21:30.541861  146109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 13:21:30.580967  146109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:21:30.620580  146109 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 13:21:30.620611  146109 cache_images.go:84] Images are preloaded, skipping loading
	I1028 13:21:30.620622  146109 kubeadm.go:934] updating node { 192.168.39.112 8443 v1.31.2 crio true true} ...
	I1028 13:21:30.620757  146109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-297280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1028 13:21:30.620851  146109 ssh_runner.go:195] Run: crio config
	I1028 13:21:30.668093  146109 cni.go:84] Creating CNI manager for "bridge"
	I1028 13:21:30.668125  146109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 13:21:30.668155  146109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.112 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-297280 NodeName:bridge-297280 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 13:21:30.668310  146109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-297280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.112"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.112"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
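The generated kubeadm config above is a single multi-document YAML containing InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, later written out as kubeadm.yaml.new. A small Go sketch (standard library only) that splits such a file on document separators and lists the kinds, just to make that structure explicit; the local file path is hypothetical:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the config above
    	if err != nil {
    		panic(err)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		kind := "unknown"
    		for _, line := range strings.Split(doc, "\n") {
    			trimmed := strings.TrimSpace(line)
    			if strings.HasPrefix(trimmed, "kind:") {
    				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
    				break
    			}
    		}
    		fmt.Printf("document %d: kind=%s\n", i+1, kind)
    	}
    }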
	
	I1028 13:21:30.668391  146109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 13:21:30.677903  146109 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 13:21:30.677965  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 13:21:30.686535  146109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 13:21:30.701888  146109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 13:21:30.719347  146109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1028 13:21:30.737627  146109 ssh_runner.go:195] Run: grep 192.168.39.112	control-plane.minikube.internal$ /etc/hosts
	I1028 13:21:30.741621  146109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:21:30.754300  146109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:30.881740  146109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:21:30.899697  146109 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280 for IP: 192.168.39.112
	I1028 13:21:30.899718  146109 certs.go:194] generating shared ca certs ...
	I1028 13:21:30.899734  146109 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:30.899892  146109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 13:21:30.899932  146109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 13:21:30.899942  146109 certs.go:256] generating profile certs ...
	I1028 13:21:30.899994  146109 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.key
	I1028 13:21:30.900007  146109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt with IP's: []
	I1028 13:21:30.987550  146109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt ...
	I1028 13:21:30.987586  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: {Name:mk79a6093853f2cde5aa1baf0f2bc6f508cee547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:30.987783  146109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.key ...
	I1028 13:21:30.987798  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.key: {Name:mk09f697959641408c65ca0388fc1d990b962a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:30.987880  146109 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791
	I1028 13:21:30.987896  146109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.112]
	I1028 13:21:31.188372  146109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791 ...
	I1028 13:21:31.188403  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791: {Name:mk496324e33762e58876509195201ae7807339d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:31.188571  146109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791 ...
	I1028 13:21:31.188587  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791: {Name:mka0264ae6a0338ceffb7420c63e6b9a4b434e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:31.188665  146109 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt
	I1028 13:21:31.188760  146109 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key
	I1028 13:21:31.188816  146109 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key
	I1028 13:21:31.188831  146109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt with IP's: []
	I1028 13:21:31.650341  146109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt ...
	I1028 13:21:31.650377  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt: {Name:mk2f355024f7f8b979d837a3536f0df783524eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:31.650553  146109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key ...
	I1028 13:21:31.650563  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key: {Name:mk428d65df121e82dbcfe11b89d556b50be8b966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:31.650736  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 13:21:31.650774  146109 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 13:21:31.650783  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 13:21:31.650805  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 13:21:31.650831  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 13:21:31.650854  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 13:21:31.650889  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:21:31.651462  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 13:21:31.679781  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 13:21:31.702015  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 13:21:31.732449  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 13:21:31.754292  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 13:21:31.776328  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 13:21:31.798701  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 13:21:31.820197  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 13:21:31.841519  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 13:21:31.863529  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 13:21:31.885112  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 13:21:31.906558  146109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 13:21:31.920959  146109 ssh_runner.go:195] Run: openssl version
	I1028 13:21:31.926007  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 13:21:31.936992  146109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 13:21:31.941092  146109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 13:21:31.941146  146109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 13:21:31.946538  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 13:21:31.956356  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 13:21:31.965919  146109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:21:31.969858  146109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:21:31.969910  146109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:21:31.975148  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 13:21:31.984781  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 13:21:31.994314  146109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 13:21:31.998078  146109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 13:21:31.998118  146109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 13:21:32.003062  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
	I1028 13:21:32.012656  146109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 13:21:32.016062  146109 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 13:21:32.016122  146109 kubeadm.go:392] StartCluster: {Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:21:32.016222  146109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 13:21:32.016285  146109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 13:21:32.052814  146109 cri.go:89] found id: ""
	I1028 13:21:32.052890  146109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 13:21:32.062470  146109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 13:21:32.072842  146109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 13:21:32.085188  146109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 13:21:32.085206  146109 kubeadm.go:157] found existing configuration files:
	
	I1028 13:21:32.085243  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 13:21:32.094795  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 13:21:32.094865  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 13:21:32.105438  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 13:21:32.114693  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 13:21:32.114751  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 13:21:32.125368  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 13:21:32.134360  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 13:21:32.134425  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 13:21:32.143533  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 13:21:32.152254  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 13:21:32.152312  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 13:21:32.161290  146109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 13:21:32.213514  146109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 13:21:32.213652  146109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 13:21:32.309172  146109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 13:21:32.309295  146109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 13:21:32.309415  146109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 13:21:32.320530  146109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 13:21:29.574046  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:31.073720  144327 pod_ready.go:93] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.073745  144327 pod_ready.go:82] duration metric: took 12.506663632s for pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.073759  144327 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.077961  144327 pod_ready.go:93] pod "etcd-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.077980  144327 pod_ready.go:82] duration metric: took 4.214913ms for pod "etcd-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.077988  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.082443  144327 pod_ready.go:93] pod "kube-apiserver-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.082460  144327 pod_ready.go:82] duration metric: took 4.466366ms for pod "kube-apiserver-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.082468  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.086562  144327 pod_ready.go:93] pod "kube-controller-manager-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.086589  144327 pod_ready.go:82] duration metric: took 4.113046ms for pod "kube-controller-manager-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.086600  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-w25fl" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.090417  144327 pod_ready.go:93] pod "kube-proxy-w25fl" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.090434  144327 pod_ready.go:82] duration metric: took 3.826364ms for pod "kube-proxy-w25fl" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.090442  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.472296  144327 pod_ready.go:93] pod "kube-scheduler-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.472320  144327 pod_ready.go:82] duration metric: took 381.871698ms for pod "kube-scheduler-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.472330  144327 pod_ready.go:39] duration metric: took 12.913532889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:31.472345  144327 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:21:31.472398  144327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:21:31.489350  144327 api_server.go:72] duration metric: took 21.686135288s to wait for apiserver process to appear ...
	I1028 13:21:31.489385  144327 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:21:31.489412  144327 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1028 13:21:31.493639  144327 api_server.go:279] https://192.168.50.159:8443/healthz returned 200:
	ok
	I1028 13:21:31.494587  144327 api_server.go:141] control plane version: v1.31.2
	I1028 13:21:31.494612  144327 api_server.go:131] duration metric: took 5.218433ms to wait for apiserver health ...
	I1028 13:21:31.494622  144327 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:21:31.674868  144327 system_pods.go:59] 7 kube-system pods found
	I1028 13:21:31.674906  144327 system_pods.go:61] "coredns-7c65d6cfc9-dj9l8" [827e8aa3-3be8-4683-909b-e1ae71a5e4ca] Running
	I1028 13:21:31.674915  144327 system_pods.go:61] "etcd-flannel-297280" [0d814ce7-0894-461e-a6f1-c5aeef16179b] Running
	I1028 13:21:31.674920  144327 system_pods.go:61] "kube-apiserver-flannel-297280" [7c428754-bd41-41f2-807f-6e382c3a9f98] Running
	I1028 13:21:31.674924  144327 system_pods.go:61] "kube-controller-manager-flannel-297280" [0c2bf4a8-3fc9-4540-80f7-914f70794f35] Running
	I1028 13:21:31.674929  144327 system_pods.go:61] "kube-proxy-w25fl" [1d762705-572a-4f70-a6a7-cd2609806ff4] Running
	I1028 13:21:31.674933  144327 system_pods.go:61] "kube-scheduler-flannel-297280" [45bc3533-f30b-4238-a30b-2e219ffc864b] Running
	I1028 13:21:31.674937  144327 system_pods.go:61] "storage-provisioner" [2511defb-d9ca-46b0-a02a-6ddf77363fa2] Running
	I1028 13:21:31.674946  144327 system_pods.go:74] duration metric: took 180.316718ms to wait for pod list to return data ...
	I1028 13:21:31.674957  144327 default_sa.go:34] waiting for default service account to be created ...
	I1028 13:21:31.872025  144327 default_sa.go:45] found service account: "default"
	I1028 13:21:31.872059  144327 default_sa.go:55] duration metric: took 197.092111ms for default service account to be created ...
	I1028 13:21:31.872073  144327 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 13:21:32.075282  144327 system_pods.go:86] 7 kube-system pods found
	I1028 13:21:32.075308  144327 system_pods.go:89] "coredns-7c65d6cfc9-dj9l8" [827e8aa3-3be8-4683-909b-e1ae71a5e4ca] Running
	I1028 13:21:32.075317  144327 system_pods.go:89] "etcd-flannel-297280" [0d814ce7-0894-461e-a6f1-c5aeef16179b] Running
	I1028 13:21:32.075323  144327 system_pods.go:89] "kube-apiserver-flannel-297280" [7c428754-bd41-41f2-807f-6e382c3a9f98] Running
	I1028 13:21:32.075330  144327 system_pods.go:89] "kube-controller-manager-flannel-297280" [0c2bf4a8-3fc9-4540-80f7-914f70794f35] Running
	I1028 13:21:32.075354  144327 system_pods.go:89] "kube-proxy-w25fl" [1d762705-572a-4f70-a6a7-cd2609806ff4] Running
	I1028 13:21:32.075363  144327 system_pods.go:89] "kube-scheduler-flannel-297280" [45bc3533-f30b-4238-a30b-2e219ffc864b] Running
	I1028 13:21:32.075368  144327 system_pods.go:89] "storage-provisioner" [2511defb-d9ca-46b0-a02a-6ddf77363fa2] Running
	I1028 13:21:32.075379  144327 system_pods.go:126] duration metric: took 203.300304ms to wait for k8s-apps to be running ...
	I1028 13:21:32.075392  144327 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 13:21:32.075455  144327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:21:32.091407  144327 system_svc.go:56] duration metric: took 16.004372ms WaitForService to wait for kubelet
	I1028 13:21:32.091443  144327 kubeadm.go:582] duration metric: took 22.288245052s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:21:32.091471  144327 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:21:32.272052  144327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:21:32.272081  144327 node_conditions.go:123] node cpu capacity is 2
	I1028 13:21:32.272092  144327 node_conditions.go:105] duration metric: took 180.614791ms to run NodePressure ...
	I1028 13:21:32.272105  144327 start.go:241] waiting for startup goroutines ...
	I1028 13:21:32.272111  144327 start.go:246] waiting for cluster config update ...
	I1028 13:21:32.272121  144327 start.go:255] writing updated cluster config ...
	I1028 13:21:32.389776  144327 ssh_runner.go:195] Run: rm -f paused
	I1028 13:21:32.453364  144327 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 13:21:32.598061  144327 out.go:177] * Done! kubectl is now configured to use "flannel-297280" cluster and "default" namespace by default
	I1028 13:21:32.449773  146109 out.go:235]   - Generating certificates and keys ...
	I1028 13:21:32.449921  146109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 13:21:32.450013  146109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 13:21:32.526386  146109 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 13:21:32.701429  146109 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 13:21:32.987102  146109 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 13:21:33.144164  146109 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 13:21:33.634779  146109 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 13:21:33.634959  146109 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-297280 localhost] and IPs [192.168.39.112 127.0.0.1 ::1]
	I1028 13:21:33.787442  146109 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 13:21:33.787554  146109 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-297280 localhost] and IPs [192.168.39.112 127.0.0.1 ::1]
	I1028 13:21:33.889788  146109 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 13:21:33.953480  146109 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 13:21:34.480108  146109 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 13:21:34.480335  146109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 13:21:34.629223  146109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 13:21:34.944761  146109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 13:21:35.474045  146109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 13:21:35.613778  146109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 13:21:35.851081  146109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 13:21:35.851760  146109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 13:21:35.857087  146109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 13:21:35.858729  146109 out.go:235]   - Booting up control plane ...
	I1028 13:21:35.858855  146109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 13:21:35.858973  146109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 13:21:35.859145  146109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 13:21:35.879880  146109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 13:21:35.889596  146109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 13:21:35.889664  146109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 13:21:36.049895  146109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 13:21:36.050130  146109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 13:21:36.551969  146109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.988197ms
	I1028 13:21:36.552055  146109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 13:21:42.053142  146109 kubeadm.go:310] [api-check] The API server is healthy after 5.502178141s
	I1028 13:21:42.066497  146109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 13:21:42.085850  146109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 13:21:42.158319  146109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 13:21:42.158576  146109 kubeadm.go:310] [mark-control-plane] Marking the node bridge-297280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 13:21:42.191377  146109 kubeadm.go:310] [bootstrap-token] Using token: 90lr9g.qn73b7ozx49ax2he
	I1028 13:21:42.192796  146109 out.go:235]   - Configuring RBAC rules ...
	I1028 13:21:42.192951  146109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 13:21:42.209600  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 13:21:42.229224  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 13:21:42.235513  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 13:21:42.239838  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 13:21:42.249492  146109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 13:21:42.462811  146109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 13:21:43.161813  146109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 13:21:43.462072  146109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 13:21:43.462999  146109 kubeadm.go:310] 
	I1028 13:21:43.463081  146109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 13:21:43.463090  146109 kubeadm.go:310] 
	I1028 13:21:43.463187  146109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 13:21:43.463193  146109 kubeadm.go:310] 
	I1028 13:21:43.463230  146109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 13:21:43.463300  146109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 13:21:43.463380  146109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 13:21:43.463415  146109 kubeadm.go:310] 
	I1028 13:21:43.463526  146109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 13:21:43.463544  146109 kubeadm.go:310] 
	I1028 13:21:43.463610  146109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 13:21:43.463618  146109 kubeadm.go:310] 
	I1028 13:21:43.463706  146109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 13:21:43.463804  146109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 13:21:43.463887  146109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 13:21:43.463897  146109 kubeadm.go:310] 
	I1028 13:21:43.463985  146109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 13:21:43.464102  146109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 13:21:43.464119  146109 kubeadm.go:310] 
	I1028 13:21:43.464235  146109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 90lr9g.qn73b7ozx49ax2he \
	I1028 13:21:43.464385  146109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 13:21:43.464417  146109 kubeadm.go:310] 	--control-plane 
	I1028 13:21:43.464427  146109 kubeadm.go:310] 
	I1028 13:21:43.464542  146109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 13:21:43.464551  146109 kubeadm.go:310] 
	I1028 13:21:43.464658  146109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 90lr9g.qn73b7ozx49ax2he \
	I1028 13:21:43.464824  146109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
	I1028 13:21:43.465570  146109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 13:21:43.465604  146109 cni.go:84] Creating CNI manager for "bridge"
	I1028 13:21:43.467222  146109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 13:21:43.468504  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 13:21:43.484876  146109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 13:21:43.511776  146109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 13:21:43.511924  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:43.511934  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-297280 minikube.k8s.io/updated_at=2024_10_28T13_21_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=bridge-297280 minikube.k8s.io/primary=true
	I1028 13:21:43.546899  146109 ops.go:34] apiserver oom_adj: -16
	I1028 13:21:43.608972  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:44.109937  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:44.609124  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:45.109712  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:45.609088  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:46.109103  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:46.609719  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:47.109301  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:47.183694  146109 kubeadm.go:1113] duration metric: took 3.671834423s to wait for elevateKubeSystemPrivileges
	I1028 13:21:47.183744  146109 kubeadm.go:394] duration metric: took 15.167628801s to StartCluster
	I1028 13:21:47.183770  146109 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:47.183859  146109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:21:47.185196  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:47.185417  146109 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:21:47.185426  146109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 13:21:47.185489  146109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 13:21:47.185593  146109 addons.go:69] Setting storage-provisioner=true in profile "bridge-297280"
	I1028 13:21:47.185609  146109 addons.go:69] Setting default-storageclass=true in profile "bridge-297280"
	I1028 13:21:47.185615  146109 config.go:182] Loaded profile config "bridge-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:21:47.185648  146109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-297280"
	I1028 13:21:47.185613  146109 addons.go:234] Setting addon storage-provisioner=true in "bridge-297280"
	I1028 13:21:47.185755  146109 host.go:66] Checking if "bridge-297280" exists ...
	I1028 13:21:47.186106  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.186148  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.186157  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.186184  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.187270  146109 out.go:177] * Verifying Kubernetes components...
	I1028 13:21:47.188728  146109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:47.201939  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I1028 13:21:47.202108  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 13:21:47.202428  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.202582  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.202994  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.203021  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.203088  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.203099  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.203366  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.203430  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.203533  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:47.204023  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.204054  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.207386  146109 addons.go:234] Setting addon default-storageclass=true in "bridge-297280"
	I1028 13:21:47.207429  146109 host.go:66] Checking if "bridge-297280" exists ...
	I1028 13:21:47.207824  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.207867  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.224857  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
	I1028 13:21:47.225295  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.225918  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.225947  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.226458  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.226742  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I1028 13:21:47.226871  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:47.227255  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.227793  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.227817  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.228280  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.228950  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.228970  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:47.228989  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.231130  146109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 13:21:47.232676  146109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:47.232704  146109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 13:21:47.232726  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:47.236002  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.236524  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:47.236548  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.236612  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:47.236799  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:47.236967  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:47.237083  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:47.247783  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33597
	I1028 13:21:47.248301  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.248830  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.248855  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.249221  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.249435  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:47.250978  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:47.251196  146109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:47.251215  146109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 13:21:47.251234  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:47.253593  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.254041  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:47.254057  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.254240  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:47.254488  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:47.254652  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:47.254773  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:47.387303  146109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 13:21:47.392573  146109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:21:47.521581  146109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:47.523934  146109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:47.868498  146109 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1028 13:21:47.868675  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:47.868706  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:47.869003  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:47.869022  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:47.869037  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:47.869044  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:47.869326  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:47.869342  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:47.869910  146109 node_ready.go:35] waiting up to 15m0s for node "bridge-297280" to be "Ready" ...
	I1028 13:21:47.902383  146109 node_ready.go:49] node "bridge-297280" has status "Ready":"True"
	I1028 13:21:47.902408  146109 node_ready.go:38] duration metric: took 32.473404ms for node "bridge-297280" to be "Ready" ...
	I1028 13:21:47.902419  146109 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:47.927652  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:47.927685  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:47.927960  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:47.927979  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:47.927983  146109 main.go:141] libmachine: (bridge-297280) DBG | Closing plugin on server side
	I1028 13:21:47.931924  146109 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:48.254058  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:48.254097  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:48.254462  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:48.254490  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:48.254500  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:48.254513  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:48.254872  146109 main.go:141] libmachine: (bridge-297280) DBG | Closing plugin on server side
	I1028 13:21:48.254976  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:48.254996  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:48.256591  146109 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 13:21:48.257768  146109 addons.go:510] duration metric: took 1.072279574s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 13:21:48.374965  146109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-297280" context rescaled to 1 replicas
	I1028 13:21:49.942863  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:52.437501  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:54.438249  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:56.439499  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:58.941152  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:01.438688  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:03.438742  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:05.937699  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:07.938403  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:10.438349  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:12.438734  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:14.938257  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:16.938345  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:18.938663  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:21.437429  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:23.437471  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:25.438094  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:27.937981  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:29.938183  146109 pod_ready.go:93] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.938210  146109 pod_ready.go:82] duration metric: took 42.006254407s for pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.938223  146109 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.939711  146109 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vg67t" not found
	I1028 13:22:29.939738  146109 pod_ready.go:82] duration metric: took 1.50706ms for pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace to be "Ready" ...
	E1028 13:22:29.939750  146109 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vg67t" not found
	I1028 13:22:29.939760  146109 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.943344  146109 pod_ready.go:93] pod "etcd-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.943366  146109 pod_ready.go:82] duration metric: took 3.598317ms for pod "etcd-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.943378  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.947006  146109 pod_ready.go:93] pod "kube-apiserver-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.947024  146109 pod_ready.go:82] duration metric: took 3.639746ms for pod "kube-apiserver-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.947032  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.951162  146109 pod_ready.go:93] pod "kube-controller-manager-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.951177  146109 pod_ready.go:82] duration metric: took 4.13895ms for pod "kube-controller-manager-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.951186  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-b5p9h" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.135866  146109 pod_ready.go:93] pod "kube-proxy-b5p9h" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:30.135893  146109 pod_ready.go:82] duration metric: took 184.69985ms for pod "kube-proxy-b5p9h" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.135905  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.535566  146109 pod_ready.go:93] pod "kube-scheduler-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:30.535596  146109 pod_ready.go:82] duration metric: took 399.681902ms for pod "kube-scheduler-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.535606  146109 pod_ready.go:39] duration metric: took 42.633175047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:22:30.535640  146109 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:22:30.535704  146109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:22:30.551176  146109 api_server.go:72] duration metric: took 43.365726528s to wait for apiserver process to appear ...
	I1028 13:22:30.551199  146109 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:22:30.551217  146109 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I1028 13:22:30.555201  146109 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I1028 13:22:30.556230  146109 api_server.go:141] control plane version: v1.31.2
	I1028 13:22:30.556252  146109 api_server.go:131] duration metric: took 5.046545ms to wait for apiserver health ...
	I1028 13:22:30.556259  146109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:22:30.737626  146109 system_pods.go:59] 7 kube-system pods found
	I1028 13:22:30.737657  146109 system_pods.go:61] "coredns-7c65d6cfc9-sv82x" [22d9237d-92c8-4542-b976-af11fa5afab7] Running
	I1028 13:22:30.737662  146109 system_pods.go:61] "etcd-bridge-297280" [23d6720b-8493-48cd-a204-ed22e0c2b9ed] Running
	I1028 13:22:30.737666  146109 system_pods.go:61] "kube-apiserver-bridge-297280" [bb0c5106-3889-4672-aa65-7f2caea88565] Running
	I1028 13:22:30.737669  146109 system_pods.go:61] "kube-controller-manager-bridge-297280" [216e6bd4-d04a-4d9f-b00b-7fbad2734c5e] Running
	I1028 13:22:30.737672  146109 system_pods.go:61] "kube-proxy-b5p9h" [096a84b7-c39f-4fcd-8fc5-f5600efb7c46] Running
	I1028 13:22:30.737675  146109 system_pods.go:61] "kube-scheduler-bridge-297280" [c1ad345e-15ee-4f9e-9119-aef6c9571774] Running
	I1028 13:22:30.737678  146109 system_pods.go:61] "storage-provisioner" [9d198ae2-6ec7-4cd9-98f7-d70cdd12e133] Running
	I1028 13:22:30.737683  146109 system_pods.go:74] duration metric: took 181.419259ms to wait for pod list to return data ...
	I1028 13:22:30.737689  146109 default_sa.go:34] waiting for default service account to be created ...
	I1028 13:22:30.935442  146109 default_sa.go:45] found service account: "default"
	I1028 13:22:30.935468  146109 default_sa.go:55] duration metric: took 197.773341ms for default service account to be created ...
	I1028 13:22:30.935477  146109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 13:22:31.138478  146109 system_pods.go:86] 7 kube-system pods found
	I1028 13:22:31.138516  146109 system_pods.go:89] "coredns-7c65d6cfc9-sv82x" [22d9237d-92c8-4542-b976-af11fa5afab7] Running
	I1028 13:22:31.138526  146109 system_pods.go:89] "etcd-bridge-297280" [23d6720b-8493-48cd-a204-ed22e0c2b9ed] Running
	I1028 13:22:31.138537  146109 system_pods.go:89] "kube-apiserver-bridge-297280" [bb0c5106-3889-4672-aa65-7f2caea88565] Running
	I1028 13:22:31.138547  146109 system_pods.go:89] "kube-controller-manager-bridge-297280" [216e6bd4-d04a-4d9f-b00b-7fbad2734c5e] Running
	I1028 13:22:31.138555  146109 system_pods.go:89] "kube-proxy-b5p9h" [096a84b7-c39f-4fcd-8fc5-f5600efb7c46] Running
	I1028 13:22:31.138562  146109 system_pods.go:89] "kube-scheduler-bridge-297280" [c1ad345e-15ee-4f9e-9119-aef6c9571774] Running
	I1028 13:22:31.138573  146109 system_pods.go:89] "storage-provisioner" [9d198ae2-6ec7-4cd9-98f7-d70cdd12e133] Running
	I1028 13:22:31.138588  146109 system_pods.go:126] duration metric: took 203.103805ms to wait for k8s-apps to be running ...
	I1028 13:22:31.138601  146109 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 13:22:31.138667  146109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:22:31.152706  146109 system_svc.go:56] duration metric: took 14.09764ms WaitForService to wait for kubelet
	I1028 13:22:31.152735  146109 kubeadm.go:582] duration metric: took 43.96729077s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:22:31.152756  146109 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:22:31.336420  146109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:22:31.336451  146109 node_conditions.go:123] node cpu capacity is 2
	I1028 13:22:31.336466  146109 node_conditions.go:105] duration metric: took 183.704965ms to run NodePressure ...
	I1028 13:22:31.336478  146109 start.go:241] waiting for startup goroutines ...
	I1028 13:22:31.336484  146109 start.go:246] waiting for cluster config update ...
	I1028 13:22:31.336494  146109 start.go:255] writing updated cluster config ...
	I1028 13:22:31.336762  146109 ssh_runner.go:195] Run: rm -f paused
	I1028 13:22:31.384224  146109 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 13:22:31.386367  146109 out.go:177] * Done! kubectl is now configured to use "bridge-297280" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.253269760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122021253249067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecbff7fe-4401-45da-883f-1f1cf4d41698 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.253707828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b21ed5b3-648c-4c61-a83f-f137fbddd86b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.253771420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b21ed5b3-648c-4c61-a83f-f137fbddd86b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.253962165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4
dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730121216899852166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3
-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e4604
2d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef5
2fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb9
04,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b21ed5b3-648c-4c61-a83f-f137fbddd86b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.286658875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=565276b1-0d0d-4e8e-ac2c-320678837b01 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.286726060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=565276b1-0d0d-4e8e-ac2c-320678837b01 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.287502615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cf0dc2b-1604-4011-bfff-a5f8c720ae26 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.287893056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122021287874223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cf0dc2b-1604-4011-bfff-a5f8c720ae26 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.288261379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4cf2604-c2d8-4532-a514-1a13f05b8188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.288306599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4cf2604-c2d8-4532-a514-1a13f05b8188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.288757363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4
dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730121216899852166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3
-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e4604
2d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef5
2fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb9
04,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4cf2604-c2d8-4532-a514-1a13f05b8188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.313733963Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f412f917-2e3c-4800-b976-850604ae6796 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.313973494Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-x8gvd,Uid:4498824f-7ce1-4167-8701-74cadd3fa83c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121224355353120,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:13:36.492567745Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5f19d0ea-554f-4583-897a-132f6a43d88b,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1730121224351508442,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:13:36.492562945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f3d0a479730f9c4e335ab9f17c492cdaa4f4472e0fd099cc7503f0923b1f22f,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-rkx62,Uid:31c37fb4-0650-481d-b1e3-4956769450d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121222558112040,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-rkx62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c37fb4-0650-481d-b1e3-4956769450d8,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28
T13:13:36.492561428Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&PodSandboxMetadata{Name:kube-proxy-ff797,Uid:ed2dce0b-4dc9-406e-a9c3-f91d75fa0876,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121216807089520,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4dc9-406e-a9c3-f91d75fa0876,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:13:36.492568841Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:21a53238-251d-4581-b4c3-3a788545ab0c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121216804647081,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-10-28T13:13:36.492566530Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-783661,Uid:929ab2ab8af58ab5ea6a58ca1ef52fdc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212229956178,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef52fdc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.58:8444,kubernetes.io/config.hash: 929ab2ab8af58ab5ea6a58ca1ef52fdc,kubernetes.io/config.seen: 2024-10-28T13:13:31.482643543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead983
1,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-783661,Uid:a73d04d1b11db2bf4d653e46042d6066,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212130301599,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e46042d6066,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.58:2379,kubernetes.io/config.hash: a73d04d1b11db2bf4d653e46042d6066,kubernetes.io/config.seen: 2024-10-28T13:13:31.500021374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-783661,Uid:20ee62c2966c39846bf64f2c0aebb904,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212119085403,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb904,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 20ee62c2966c39846bf64f2c0aebb904,kubernetes.io/config.seen: 2024-10-28T13:13:31.482649685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-783661,Uid:670be21a8d7463c6cb8c9defbce8fe7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212114641018,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670be21a8d7463c6cb8c9defbce8fe7a,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 670be21a8d7463c6cb8c9defbce8fe7a,kubernetes.io/config.seen: 2024-10-28T13:13:31.482648277Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f412f917-2e3c-4800-b976-850604ae6796 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.314843217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59a570e7-202a-4cd0-aa29-4d5166c03f0e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.314906280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59a570e7-202a-4cd0-aa29-4d5166c03f0e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.315120687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4
dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730121216899852166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3
-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e4604
2d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef5
2fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb9
04,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59a570e7-202a-4cd0-aa29-4d5166c03f0e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.318286992Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e90a1ee-9fbf-441f-8ead-b7c024130b06 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.318346693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e90a1ee-9fbf-441f-8ead-b7c024130b06 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.319275987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d4d6131-9def-4ca1-a6c9-d2098cbc9926 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.320063125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122021320042152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d4d6131-9def-4ca1-a6c9-d2098cbc9926 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.320536358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ebbf3cc-9142-4d84-aa83-dab010e7180a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.320584901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ebbf3cc-9142-4d84-aa83-dab010e7180a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.320763272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4
dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730121216899852166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3
-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e4604
2d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef5
2fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb9
04,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ebbf3cc-9142-4d84-aa83-dab010e7180a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.330886175Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=8d8d03ff-1acc-4b0c-8dfa-a81b39629cde name=/runtime.v1.RuntimeService/Version
	Oct 28 13:27:01 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:27:01.330959138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d8d03ff-1acc-4b0c-8dfa-a81b39629cde name=/runtime.v1.RuntimeService/Version
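The CRI-O debug entries above are the server-side view of CRI RuntimeService/ImageService RPCs (Version, ImageFsInfo, ListContainers, ListPodSandbox) issued while these artifacts were collected. Equivalent queries can be made with crictl against the same socket recorded in the node's cri-socket annotation; a sketch, assuming crictl is available on the node (for example via `minikube ssh`):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods         # RuntimeService/ListPodSandbox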
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	390339ebf1058       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   a2a1648969bb0       storage-provisioner
	d3e09c1839e3c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   e17779f35a09f       busybox
	6c37109c5ef48       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   f24eeae2d252a       coredns-7c65d6cfc9-x8gvd
	b44db812a04c7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   773d59b76c20b       kube-proxy-ff797
	dd70cdc4a6892       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   a2a1648969bb0       storage-provisioner
	018b66943fe6d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   e9e8e12d510d9       kube-controller-manager-default-k8s-diff-port-783661
	7b0b68df1e367       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   6973b279778b0       etcd-default-k8s-diff-port-783661
	11560f139fa76       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   524963d9655b6       kube-scheduler-default-k8s-diff-port-783661
	c647572f5e66a       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   91ea92ff3b0d2       kube-apiserver-default-k8s-diff-port-783661
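The table above is the container listing for the default-k8s-diff-port-783661 node; the ATTEMPT column matches each container's restartCount in the CRI-O dump (storage-provisioner is on attempt 2, the remaining containers on attempt 1 after the node restart). The same counts can be read back through the API server; a sketch, assuming the default-k8s-diff-port-783661 kubeconfig context from this run is still present:

    kubectl --context default-k8s-diff-port-783661 get pods -A \
      -o custom-columns=NAMESPACE:.metadata.namespace,POD:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount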
	
	
	==> coredns [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33796 - 46628 "HINFO IN 814899742147327372.6374675471951593904. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021446442s
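The single HINFO lookup above appears to be CoreDNS's own loop-detection self-query rather than traffic from a workload. A hedged way to exercise cluster DNS by hand, reusing the busybox image already present in the capture above (the exact tag or digest to pin is an assumption here):

    kubectl --context default-k8s-diff-port-783661 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default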
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-783661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-783661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=default-k8s-diff-port-783661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T13_05_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 13:05:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-783661
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 13:26:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 13:24:18 +0000   Mon, 28 Oct 2024 13:05:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 13:24:18 +0000   Mon, 28 Oct 2024 13:05:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 13:24:18 +0000   Mon, 28 Oct 2024 13:05:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 13:24:18 +0000   Mon, 28 Oct 2024 13:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.58
	  Hostname:    default-k8s-diff-port-783661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a3be741ed1c443d8f675efe86426771
	  System UUID:                3a3be741-ed1c-443d-8f67-5efe86426771
	  Boot ID:                    3e8c7c00-e5c0-4d8d-9c4e-6a33116d1720
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-x8gvd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-783661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-783661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-783661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-ff797                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-783661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-rkx62                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-783661 event: Registered Node default-k8s-diff-port-783661 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-783661 event: Registered Node default-k8s-diff-port-783661 in Controller
	
	
	==> dmesg <==
	[Oct28 13:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051019] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037819] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.859497] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.512631] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.581939] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.060380] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053667] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.184694] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.109678] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.241396] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +3.836319] systemd-fstab-generator[783]: Ignoring "noauto" option for root device
	[  +2.365343] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.061629] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.500880] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.411592] systemd-fstab-generator[1545]: Ignoring "noauto" option for root device
	[  +3.314668] kauditd_printk_skb: 64 callbacks suppressed
	[Oct28 13:14] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a] <==
	{"level":"warn","ts":"2024-10-28T13:20:24.656675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.778119ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:24.656803Z","caller":"traceutil/trace.go:171","msg":"trace[958523092] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:902; }","duration":"198.978313ms","start":"2024-10-28T13:20:24.457812Z","end":"2024-10-28T13:20:24.656790Z","steps":["trace[958523092] 'range keys from in-memory index tree'  (duration: 198.762362ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:20:24.657499Z","caller":"traceutil/trace.go:171","msg":"trace[587703939] transaction","detail":"{read_only:false; response_revision:903; number_of_response:1; }","duration":"120.716077ms","start":"2024-10-28T13:20:24.536763Z","end":"2024-10-28T13:20:24.657479Z","steps":["trace[587703939] 'process raft request'  (duration: 97.569434ms)","trace[587703939] 'compare'  (duration: 22.177497ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:20:24.904546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.199273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:24.904650Z","caller":"traceutil/trace.go:171","msg":"trace[2124657403] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:903; }","duration":"120.313427ms","start":"2024-10-28T13:20:24.784307Z","end":"2024-10-28T13:20:24.904621Z","steps":["trace[2124657403] 'count revisions from in-memory index tree'  (duration: 120.144062ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:20:25.352600Z","caller":"traceutil/trace.go:171","msg":"trace[1287040873] linearizableReadLoop","detail":"{readStateIndex:1019; appliedIndex:1018; }","duration":"192.021158ms","start":"2024-10-28T13:20:25.160562Z","end":"2024-10-28T13:20:25.352583Z","steps":["trace[1287040873] 'read index received'  (duration: 191.753795ms)","trace[1287040873] 'applied index is now lower than readState.Index'  (duration: 266.694µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T13:20:25.352793Z","caller":"traceutil/trace.go:171","msg":"trace[1749087699] transaction","detail":"{read_only:false; response_revision:904; number_of_response:1; }","duration":"217.228035ms","start":"2024-10-28T13:20:25.135552Z","end":"2024-10-28T13:20:25.352780Z","steps":["trace[1749087699] 'process raft request'  (duration: 216.884394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:20:25.352957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.170524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T13:20:25.353028Z","caller":"traceutil/trace.go:171","msg":"trace[1172242753] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:904; }","duration":"107.25385ms","start":"2024-10-28T13:20:25.245761Z","end":"2024-10-28T13:20:25.353015Z","steps":["trace[1172242753] 'agreement among raft nodes before linearized reading'  (duration: 107.136041ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:20:25.353185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.616206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:25.353224Z","caller":"traceutil/trace.go:171","msg":"trace[1896114426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:904; }","duration":"192.660684ms","start":"2024-10-28T13:20:25.160557Z","end":"2024-10-28T13:20:25.353218Z","steps":["trace[1896114426] 'agreement among raft nodes before linearized reading'  (duration: 192.602093ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:20:55.817983Z","caller":"traceutil/trace.go:171","msg":"trace[327840434] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"239.85397ms","start":"2024-10-28T13:20:55.578109Z","end":"2024-10-28T13:20:55.817963Z","steps":["trace[327840434] 'process raft request'  (duration: 239.61347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:20:56.272847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.265625ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324141525882451593 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" mod_revision:919 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T13:20:56.273035Z","caller":"traceutil/trace.go:171","msg":"trace[1696391543] linearizableReadLoop","detail":"{readStateIndex:1049; appliedIndex:1048; }","duration":"112.070509ms","start":"2024-10-28T13:20:56.160953Z","end":"2024-10-28T13:20:56.273023Z","steps":["trace[1696391543] 'read index received'  (duration: 25.482µs)","trace[1696391543] 'applied index is now lower than readState.Index'  (duration: 112.038146ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:20:56.273133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.175713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:56.273170Z","caller":"traceutil/trace.go:171","msg":"trace[626551964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:928; }","duration":"112.216951ms","start":"2024-10-28T13:20:56.160948Z","end":"2024-10-28T13:20:56.273165Z","steps":["trace[626551964] 'agreement among raft nodes before linearized reading'  (duration: 112.128712ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:20:56.273413Z","caller":"traceutil/trace.go:171","msg":"trace[1575647690] transaction","detail":"{read_only:false; response_revision:928; number_of_response:1; }","duration":"631.648438ms","start":"2024-10-28T13:20:55.641756Z","end":"2024-10-28T13:20:56.273404Z","steps":["trace[1575647690] 'process raft request'  (duration: 449.741748ms)","trace[1575647690] 'compare'  (duration: 181.070844ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:20:56.273504Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T13:20:55.641717Z","time spent":"631.746293ms","remote":"127.0.0.1:47792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" mod_revision:919 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" > >"}
	{"level":"warn","ts":"2024-10-28T13:20:56.840713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.114275ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:56.840776Z","caller":"traceutil/trace.go:171","msg":"trace[2028860530] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:928; }","duration":"382.223155ms","start":"2024-10-28T13:20:56.458540Z","end":"2024-10-28T13:20:56.840763Z","steps":["trace[2028860530] 'range keys from in-memory index tree'  (duration: 382.101462ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:21:32.968775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.976907ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324141525882451810 > lease_revoke:<id:46c992d342a53704>","response":"size:28"}
	{"level":"warn","ts":"2024-10-28T13:21:42.977847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.435309ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324141525882451870 > lease_revoke:<id:46c992d342a53741>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T13:23:34.958335Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":812}
	{"level":"info","ts":"2024-10-28T13:23:34.967136Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":812,"took":"8.191531ms","hash":2313614649,"current-db-size-bytes":2621440,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2621440,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-28T13:23:34.967222Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2313614649,"revision":812,"compact-revision":-1}
	
	
	==> kernel <==
	 13:27:01 up 13 min,  0 users,  load average: 0.13, 0.12, 0.08
	Linux default-k8s-diff-port-783661 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc] <==
	W1028 13:23:37.126728       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:23:37.126783       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:23:37.127907       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:23:37.127952       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:24:37.129064       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:24:37.129276       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 13:24:37.129348       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:24:37.129478       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:24:37.130581       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:24:37.130623       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:26:37.131696       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:26:37.131956       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 13:26:37.131898       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:26:37.132098       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:26:37.133237       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:26:37.133293       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835] <==
	E1028 13:21:39.635810       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:21:40.172663       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:22:09.643475       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:22:10.180188       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:22:39.649136       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:22:40.186153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:23:09.654990       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:23:10.192780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:23:39.661236       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:23:40.200101       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:24:09.666787       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:24:10.206727       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:24:18.199164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-783661"
	I1028 13:24:36.564272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="345.599µs"
	E1028 13:24:39.673118       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:24:40.213210       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:24:49.562138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="129.029µs"
	E1028 13:25:09.678628       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:25:10.220054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:25:39.683823       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:25:40.226822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:26:09.689607       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:26:10.233517       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:26:39.695897       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:26:40.240744       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 13:13:37.234447       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 13:13:37.245613       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.58"]
	E1028 13:13:37.245689       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 13:13:37.298486       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 13:13:37.298539       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 13:13:37.298570       1 server_linux.go:169] "Using iptables Proxier"
	I1028 13:13:37.300568       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 13:13:37.300783       1 server.go:483] "Version info" version="v1.31.2"
	I1028 13:13:37.300826       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 13:13:37.302437       1 config.go:199] "Starting service config controller"
	I1028 13:13:37.302472       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 13:13:37.302506       1 config.go:105] "Starting endpoint slice config controller"
	I1028 13:13:37.302510       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 13:13:37.303140       1 config.go:328] "Starting node config controller"
	I1028 13:13:37.303168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 13:13:37.402592       1 shared_informer.go:320] Caches are synced for service config
	I1028 13:13:37.402613       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 13:13:37.403224       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f] <==
	I1028 13:13:33.805249       1 serving.go:386] Generated self-signed cert in-memory
	W1028 13:13:36.091805       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 13:13:36.093815       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 13:13:36.094299       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 13:13:36.094393       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 13:13:36.138238       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 13:13:36.138275       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 13:13:36.140348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 13:13:36.140521       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 13:13:36.140589       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 13:13:36.140666       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 13:13:36.241480       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 13:25:51 default-k8s-diff-port-783661 kubelet[911]: E1028 13:25:51.705042     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121951704658926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:25:59 default-k8s-diff-port-783661 kubelet[911]: E1028 13:25:59.547778     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:26:01 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:01.706926     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121961706694331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:01 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:01.707229     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121961706694331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:11 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:11.551028     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:26:11 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:11.709021     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121971708409980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:11 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:11.709659     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121971708409980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:21 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:21.711048     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121981710786850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:21 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:21.711102     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121981710786850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:25 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:25.547976     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:26:31 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:31.560563     911 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 13:26:31 default-k8s-diff-port-783661 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 13:26:31 default-k8s-diff-port-783661 kubelet[911]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 13:26:31 default-k8s-diff-port-783661 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 13:26:31 default-k8s-diff-port-783661 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 13:26:31 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:31.713288     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121991713077339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:31 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:31.713309     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730121991713077339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:39 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:39.547738     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:26:41 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:41.718127     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122001716121284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:41 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:41.718166     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122001716121284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:51 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:51.720026     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122011719661795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:51 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:51.720063     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122011719661795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:26:52 default-k8s-diff-port-783661 kubelet[911]: E1028 13:26:52.548045     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:27:01 default-k8s-diff-port-783661 kubelet[911]: E1028 13:27:01.722493     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122021722023563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:27:01 default-k8s-diff-port-783661 kubelet[911]: E1028 13:27:01.722516     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122021722023563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053] <==
	I1028 13:14:07.809565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 13:14:07.821054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 13:14:07.821126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 13:14:07.831346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 13:14:07.831570       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-783661_bccd7f5b-ea4f-4651-ae50-e0f4e0470927!
	I1028 13:14:07.838625       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"861f9f50-5b3b-41e4-b1fc-a29ba85cf992", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-783661_bccd7f5b-ea4f-4651-ae50-e0f4e0470927 became leader
	I1028 13:14:07.932748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-783661_bccd7f5b-ea4f-4651-ae50-e0f4e0470927!
	
	
	==> storage-provisioner [dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d] <==
	I1028 13:13:36.994915       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 13:14:06.997578       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
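	The kube-apiserver and kube-controller-manager sections in the dump above repeat the same pair of errors throughout the captured window: the aggregated v1beta1.metrics.k8s.io APIService answers 503, so OpenAPI aggregation is rate-limit requeued and the resource-quota and garbage-collector controllers keep reporting metrics.k8s.io/v1beta1 as stale. That is consistent with the metrics-server pod never becoming ready. The commands below are not part of the recorded run; they are only a sketch of how the APIService state could be confirmed by hand, assuming the default-k8s-diff-port-783661 cluster is still running:

		# Hedged reproduction sketch, not from the test harness.
		kubectl --context default-k8s-diff-port-783661 get apiservice v1beta1.metrics.k8s.io
		kubectl --context default-k8s-diff-port-783661 get apiservice v1beta1.metrics.k8s.io \
		  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'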
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rkx62
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 describe pod metrics-server-6867b74b74-rkx62
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-783661 describe pod metrics-server-6867b74b74-rkx62: exit status 1 (57.820795ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rkx62" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-783661 describe pod metrics-server-6867b74b74-rkx62: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (541.98s)
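For context on the failure above: the post-mortem shows the only pod not in Running phase was metrics-server-6867b74b74-rkx62, which the kubelet log reports as stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, and by the time the describe ran that pod had already been deleted (NotFound). The commands below are not from the recorded run; they are a sketch of how the same state could be re-inspected by hand, assuming the cluster from this profile is still up:

	# List pods that are not Running (same field selector the harness uses above).
	kubectl --context default-k8s-diff-port-783661 get po -A --field-selector=status.phase!=Running
	# Inspect metrics-server by label rather than by a possibly recycled pod name;
	# the k8s-app=metrics-server label is an assumption, it is not shown in the log.
	kubectl --context default-k8s-diff-port-783661 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context default-k8s-diff-port-783661 -n kube-system describe pods -l k8s-app=metrics-server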

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1028 13:27:13.449381   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:13.838142   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:29.444381   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:31.800821   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:31.807187   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:31.818561   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:31.839876   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:31.881225   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:31.962835   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:32.124386   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:32.446578   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:33.088844   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:34.370722   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:36.932392   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:37.534091   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:42.054665   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:52.296691   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:27:54.800259   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:28:12.778919   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:28:15.787243   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:28:53.740797   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:28:57.319875   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:28:59.455640   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:00.645484   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:03.450433   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:16.722692   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:19.422122   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:20.376067   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:25.023786   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:28.346834   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:29:45.585177   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:30:09.047550   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:30:13.286515   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:30:15.662490   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:30:31.928228   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:30:59.629053   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:31:15.596262   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:31:32.861899   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:31:43.297694   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:32:00.564695   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:32:13.449166   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:32:31.801466   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:32:59.504199   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:33:57.318902   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:34:00.646496   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:34:19.422377   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:34:20.376522   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:34:45.584268   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 13:34:46.859985347 +0000 UTC m=+7068.425556786
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-783661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.37µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-783661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-783661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-783661 logs -n 25: (1.088314028s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-297280 sudo iptables                       | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo docker                         | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo cat                            | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo                                | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo find                           | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-297280 sudo crio                           | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-297280                                     | bridge-297280 | jenkins | v1.34.0 | 28 Oct 24 13:22 UTC | 28 Oct 24 13:22 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 13:20:59
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 13:20:59.730326  146109 out.go:345] Setting OutFile to fd 1 ...
	I1028 13:20:59.730428  146109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:20:59.730440  146109 out.go:358] Setting ErrFile to fd 2...
	I1028 13:20:59.730446  146109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 13:20:59.730641  146109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 13:20:59.731248  146109 out.go:352] Setting JSON to false
	I1028 13:20:59.732351  146109 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11010,"bootTime":1730110650,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 13:20:59.732464  146109 start.go:139] virtualization: kvm guest
	I1028 13:20:59.734383  146109 out.go:177] * [bridge-297280] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 13:20:59.736004  146109 notify.go:220] Checking for updates...
	I1028 13:20:59.736029  146109 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 13:20:59.737281  146109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 13:20:59.738577  146109 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:20:59.740045  146109 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:20:59.741394  146109 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 13:20:59.742734  146109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 13:20:59.744632  146109 config.go:182] Loaded profile config "default-k8s-diff-port-783661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:20:59.744786  146109 config.go:182] Loaded profile config "enable-default-cni-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:20:59.744911  146109 config.go:182] Loaded profile config "flannel-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:20:59.745017  146109 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 13:20:59.784227  146109 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 13:20:59.785566  146109 start.go:297] selected driver: kvm2
	I1028 13:20:59.785586  146109 start.go:901] validating driver "kvm2" against <nil>
	I1028 13:20:59.785601  146109 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 13:20:59.786595  146109 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:20:59.786700  146109 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 13:20:59.802632  146109 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 13:20:59.802699  146109 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 13:20:59.803057  146109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:20:59.803107  146109 cni.go:84] Creating CNI manager for "bridge"
	I1028 13:20:59.803115  146109 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 13:20:59.803185  146109 start.go:340] cluster config:
	{Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:20:59.803353  146109 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 13:20:59.805029  146109 out.go:177] * Starting "bridge-297280" primary control-plane node in "bridge-297280" cluster
	I1028 13:20:59.806163  146109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:20:59.806220  146109 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 13:20:59.806234  146109 cache.go:56] Caching tarball of preloaded images
	I1028 13:20:59.806342  146109 preload.go:172] Found /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 13:20:59.806357  146109 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 13:20:59.806493  146109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/config.json ...
	I1028 13:20:59.806522  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/config.json: {Name:mkf499151a7940cb7d6b517784be2ec3ae5a19ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:20:59.806718  146109 start.go:360] acquireMachinesLock for bridge-297280: {Name:mk3bffd01e1b77203f4a9bec69bad605167273db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 13:20:59.806773  146109 start.go:364] duration metric: took 32.091µs to acquireMachinesLock for "bridge-297280"
	I1028 13:20:59.806799  146109 start.go:93] Provisioning new machine with config: &{Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:20:59.806896  146109 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 13:21:01.158893  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:03.162269  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:20:59.809646  146109 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1028 13:20:59.809832  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:20:59.809897  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:20:59.826229  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
	I1028 13:20:59.826822  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:20:59.827504  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:20:59.827533  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:20:59.827948  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:20:59.828171  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:20:59.828354  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:20:59.828544  146109 start.go:159] libmachine.API.Create for "bridge-297280" (driver="kvm2")
	I1028 13:20:59.828578  146109 client.go:168] LocalClient.Create starting
	I1028 13:20:59.828618  146109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem
	I1028 13:20:59.828661  146109 main.go:141] libmachine: Decoding PEM data...
	I1028 13:20:59.828694  146109 main.go:141] libmachine: Parsing certificate...
	I1028 13:20:59.828758  146109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem
	I1028 13:20:59.828786  146109 main.go:141] libmachine: Decoding PEM data...
	I1028 13:20:59.828802  146109 main.go:141] libmachine: Parsing certificate...
	I1028 13:20:59.828841  146109 main.go:141] libmachine: Running pre-create checks...
	I1028 13:20:59.828861  146109 main.go:141] libmachine: (bridge-297280) Calling .PreCreateCheck
	I1028 13:20:59.829331  146109 main.go:141] libmachine: (bridge-297280) Calling .GetConfigRaw
	I1028 13:20:59.829797  146109 main.go:141] libmachine: Creating machine...
	I1028 13:20:59.829813  146109 main.go:141] libmachine: (bridge-297280) Calling .Create
	I1028 13:20:59.829970  146109 main.go:141] libmachine: (bridge-297280) Creating KVM machine...
	I1028 13:20:59.831375  146109 main.go:141] libmachine: (bridge-297280) DBG | found existing default KVM network
	I1028 13:20:59.832954  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:20:59.832767  146132 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211820}
	I1028 13:20:59.832975  146109 main.go:141] libmachine: (bridge-297280) DBG | created network xml: 
	I1028 13:20:59.832986  146109 main.go:141] libmachine: (bridge-297280) DBG | <network>
	I1028 13:20:59.832994  146109 main.go:141] libmachine: (bridge-297280) DBG |   <name>mk-bridge-297280</name>
	I1028 13:20:59.833003  146109 main.go:141] libmachine: (bridge-297280) DBG |   <dns enable='no'/>
	I1028 13:20:59.833013  146109 main.go:141] libmachine: (bridge-297280) DBG |   
	I1028 13:20:59.833025  146109 main.go:141] libmachine: (bridge-297280) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 13:20:59.833036  146109 main.go:141] libmachine: (bridge-297280) DBG |     <dhcp>
	I1028 13:20:59.833094  146109 main.go:141] libmachine: (bridge-297280) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 13:20:59.833124  146109 main.go:141] libmachine: (bridge-297280) DBG |     </dhcp>
	I1028 13:20:59.833139  146109 main.go:141] libmachine: (bridge-297280) DBG |   </ip>
	I1028 13:20:59.833148  146109 main.go:141] libmachine: (bridge-297280) DBG |   
	I1028 13:20:59.833156  146109 main.go:141] libmachine: (bridge-297280) DBG | </network>
	I1028 13:20:59.833161  146109 main.go:141] libmachine: (bridge-297280) DBG | 
	I1028 13:20:59.838256  146109 main.go:141] libmachine: (bridge-297280) DBG | trying to create private KVM network mk-bridge-297280 192.168.39.0/24...
	I1028 13:20:59.924254  146109 main.go:141] libmachine: (bridge-297280) DBG | private KVM network mk-bridge-297280 192.168.39.0/24 created
	I1028 13:20:59.924284  146109 main.go:141] libmachine: (bridge-297280) Setting up store path in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280 ...
	I1028 13:20:59.924298  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:20:59.924205  146132 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:20:59.924376  146109 main.go:141] libmachine: (bridge-297280) Building disk image from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 13:20:59.924413  146109 main.go:141] libmachine: (bridge-297280) Downloading /home/jenkins/minikube-integration/19875-77800/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 13:21:00.208472  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:00.208340  146132 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa...
	I1028 13:21:00.328153  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:00.327989  146132 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/bridge-297280.rawdisk...
	I1028 13:21:00.328184  146109 main.go:141] libmachine: (bridge-297280) DBG | Writing magic tar header
	I1028 13:21:00.328197  146109 main.go:141] libmachine: (bridge-297280) DBG | Writing SSH key tar header
	I1028 13:21:00.328734  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:00.328492  146132 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280 ...
	I1028 13:21:00.329510  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280
	I1028 13:21:00.329553  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube/machines
	I1028 13:21:00.329568  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280 (perms=drwx------)
	I1028 13:21:00.329649  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube/machines (perms=drwxr-xr-x)
	I1028 13:21:00.329665  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 13:21:00.329680  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19875-77800
	I1028 13:21:00.329690  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 13:21:00.329722  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home/jenkins
	I1028 13:21:00.329738  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800/.minikube (perms=drwxr-xr-x)
	I1028 13:21:00.329754  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration/19875-77800 (perms=drwxrwxr-x)
	I1028 13:21:00.329787  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 13:21:00.329799  146109 main.go:141] libmachine: (bridge-297280) DBG | Checking permissions on dir: /home
	I1028 13:21:00.329814  146109 main.go:141] libmachine: (bridge-297280) DBG | Skipping /home - not owner
	I1028 13:21:00.329833  146109 main.go:141] libmachine: (bridge-297280) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 13:21:00.329859  146109 main.go:141] libmachine: (bridge-297280) Creating domain...
	I1028 13:21:00.330951  146109 main.go:141] libmachine: (bridge-297280) define libvirt domain using xml: 
	I1028 13:21:00.330976  146109 main.go:141] libmachine: (bridge-297280) <domain type='kvm'>
	I1028 13:21:00.330996  146109 main.go:141] libmachine: (bridge-297280)   <name>bridge-297280</name>
	I1028 13:21:00.331011  146109 main.go:141] libmachine: (bridge-297280)   <memory unit='MiB'>3072</memory>
	I1028 13:21:00.331023  146109 main.go:141] libmachine: (bridge-297280)   <vcpu>2</vcpu>
	I1028 13:21:00.331036  146109 main.go:141] libmachine: (bridge-297280)   <features>
	I1028 13:21:00.331048  146109 main.go:141] libmachine: (bridge-297280)     <acpi/>
	I1028 13:21:00.331054  146109 main.go:141] libmachine: (bridge-297280)     <apic/>
	I1028 13:21:00.331062  146109 main.go:141] libmachine: (bridge-297280)     <pae/>
	I1028 13:21:00.331068  146109 main.go:141] libmachine: (bridge-297280)     
	I1028 13:21:00.331073  146109 main.go:141] libmachine: (bridge-297280)   </features>
	I1028 13:21:00.331077  146109 main.go:141] libmachine: (bridge-297280)   <cpu mode='host-passthrough'>
	I1028 13:21:00.331081  146109 main.go:141] libmachine: (bridge-297280)   
	I1028 13:21:00.331085  146109 main.go:141] libmachine: (bridge-297280)   </cpu>
	I1028 13:21:00.331089  146109 main.go:141] libmachine: (bridge-297280)   <os>
	I1028 13:21:00.331093  146109 main.go:141] libmachine: (bridge-297280)     <type>hvm</type>
	I1028 13:21:00.331098  146109 main.go:141] libmachine: (bridge-297280)     <boot dev='cdrom'/>
	I1028 13:21:00.331102  146109 main.go:141] libmachine: (bridge-297280)     <boot dev='hd'/>
	I1028 13:21:00.331110  146109 main.go:141] libmachine: (bridge-297280)     <bootmenu enable='no'/>
	I1028 13:21:00.331115  146109 main.go:141] libmachine: (bridge-297280)   </os>
	I1028 13:21:00.331123  146109 main.go:141] libmachine: (bridge-297280)   <devices>
	I1028 13:21:00.331133  146109 main.go:141] libmachine: (bridge-297280)     <disk type='file' device='cdrom'>
	I1028 13:21:00.331145  146109 main.go:141] libmachine: (bridge-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/boot2docker.iso'/>
	I1028 13:21:00.331169  146109 main.go:141] libmachine: (bridge-297280)       <target dev='hdc' bus='scsi'/>
	I1028 13:21:00.331180  146109 main.go:141] libmachine: (bridge-297280)       <readonly/>
	I1028 13:21:00.331186  146109 main.go:141] libmachine: (bridge-297280)     </disk>
	I1028 13:21:00.331232  146109 main.go:141] libmachine: (bridge-297280)     <disk type='file' device='disk'>
	I1028 13:21:00.331272  146109 main.go:141] libmachine: (bridge-297280)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 13:21:00.331311  146109 main.go:141] libmachine: (bridge-297280)       <source file='/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/bridge-297280.rawdisk'/>
	I1028 13:21:00.331336  146109 main.go:141] libmachine: (bridge-297280)       <target dev='hda' bus='virtio'/>
	I1028 13:21:00.331349  146109 main.go:141] libmachine: (bridge-297280)     </disk>
	I1028 13:21:00.331360  146109 main.go:141] libmachine: (bridge-297280)     <interface type='network'>
	I1028 13:21:00.331369  146109 main.go:141] libmachine: (bridge-297280)       <source network='mk-bridge-297280'/>
	I1028 13:21:00.331379  146109 main.go:141] libmachine: (bridge-297280)       <model type='virtio'/>
	I1028 13:21:00.331390  146109 main.go:141] libmachine: (bridge-297280)     </interface>
	I1028 13:21:00.331400  146109 main.go:141] libmachine: (bridge-297280)     <interface type='network'>
	I1028 13:21:00.331411  146109 main.go:141] libmachine: (bridge-297280)       <source network='default'/>
	I1028 13:21:00.331421  146109 main.go:141] libmachine: (bridge-297280)       <model type='virtio'/>
	I1028 13:21:00.331430  146109 main.go:141] libmachine: (bridge-297280)     </interface>
	I1028 13:21:00.331445  146109 main.go:141] libmachine: (bridge-297280)     <serial type='pty'>
	I1028 13:21:00.331455  146109 main.go:141] libmachine: (bridge-297280)       <target port='0'/>
	I1028 13:21:00.331462  146109 main.go:141] libmachine: (bridge-297280)     </serial>
	I1028 13:21:00.331472  146109 main.go:141] libmachine: (bridge-297280)     <console type='pty'>
	I1028 13:21:00.331480  146109 main.go:141] libmachine: (bridge-297280)       <target type='serial' port='0'/>
	I1028 13:21:00.331498  146109 main.go:141] libmachine: (bridge-297280)     </console>
	I1028 13:21:00.331513  146109 main.go:141] libmachine: (bridge-297280)     <rng model='virtio'>
	I1028 13:21:00.331526  146109 main.go:141] libmachine: (bridge-297280)       <backend model='random'>/dev/random</backend>
	I1028 13:21:00.331547  146109 main.go:141] libmachine: (bridge-297280)     </rng>
	I1028 13:21:00.331558  146109 main.go:141] libmachine: (bridge-297280)     
	I1028 13:21:00.331568  146109 main.go:141] libmachine: (bridge-297280)     
	I1028 13:21:00.331587  146109 main.go:141] libmachine: (bridge-297280)   </devices>
	I1028 13:21:00.331604  146109 main.go:141] libmachine: (bridge-297280) </domain>
	I1028 13:21:00.331647  146109 main.go:141] libmachine: (bridge-297280) 
	I1028 13:21:00.336655  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:9d:94:98 in network default
	I1028 13:21:00.337380  146109 main.go:141] libmachine: (bridge-297280) Ensuring networks are active...
	I1028 13:21:00.337397  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:00.338277  146109 main.go:141] libmachine: (bridge-297280) Ensuring network default is active
	I1028 13:21:00.338623  146109 main.go:141] libmachine: (bridge-297280) Ensuring network mk-bridge-297280 is active
	I1028 13:21:00.339170  146109 main.go:141] libmachine: (bridge-297280) Getting domain xml...
	I1028 13:21:00.339967  146109 main.go:141] libmachine: (bridge-297280) Creating domain...
	I1028 13:21:01.636379  146109 main.go:141] libmachine: (bridge-297280) Waiting to get IP...
	I1028 13:21:01.637495  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:01.638088  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:01.638116  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:01.638063  146132 retry.go:31] will retry after 289.404152ms: waiting for machine to come up
	I1028 13:21:01.929711  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:01.930295  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:01.930322  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:01.930260  146132 retry.go:31] will retry after 278.924935ms: waiting for machine to come up
	I1028 13:21:02.210852  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:02.211341  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:02.211371  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:02.211287  146132 retry.go:31] will retry after 333.293065ms: waiting for machine to come up
	I1028 13:21:02.545917  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:02.546514  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:02.546542  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:02.546454  146132 retry.go:31] will retry after 500.258922ms: waiting for machine to come up
	I1028 13:21:03.047994  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:03.048535  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:03.048568  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:03.048476  146132 retry.go:31] will retry after 538.451624ms: waiting for machine to come up
	I1028 13:21:03.588801  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:03.589368  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:03.589400  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:03.589323  146132 retry.go:31] will retry after 596.904677ms: waiting for machine to come up
	I1028 13:21:04.188066  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:04.188678  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:04.188713  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:04.188606  146132 retry.go:31] will retry after 1.087456635s: waiting for machine to come up
	I1028 13:21:06.135317  144327 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 13:21:06.135411  144327 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 13:21:06.135531  144327 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 13:21:06.135699  144327 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 13:21:06.135878  144327 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 13:21:06.135990  144327 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 13:21:06.137628  144327 out.go:235]   - Generating certificates and keys ...
	I1028 13:21:06.137735  144327 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 13:21:06.137850  144327 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 13:21:06.137963  144327 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 13:21:06.138080  144327 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 13:21:06.138153  144327 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 13:21:06.138210  144327 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 13:21:06.138286  144327 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 13:21:06.138484  144327 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-297280 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	I1028 13:21:06.138572  144327 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 13:21:06.138705  144327 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-297280 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	I1028 13:21:06.138791  144327 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 13:21:06.138864  144327 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 13:21:06.138924  144327 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 13:21:06.138996  144327 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 13:21:06.139070  144327 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 13:21:06.139145  144327 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 13:21:06.139233  144327 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 13:21:06.139327  144327 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 13:21:06.139401  144327 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 13:21:06.139511  144327 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 13:21:06.139606  144327 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 13:21:06.140986  144327 out.go:235]   - Booting up control plane ...
	I1028 13:21:06.141117  144327 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 13:21:06.141236  144327 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 13:21:06.141347  144327 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 13:21:06.141513  144327 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 13:21:06.141639  144327 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 13:21:06.141709  144327 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 13:21:06.141906  144327 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 13:21:06.142043  144327 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 13:21:06.142121  144327 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.346899ms
	I1028 13:21:06.142201  144327 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 13:21:06.142264  144327 kubeadm.go:310] [api-check] The API server is healthy after 5.501976932s
	I1028 13:21:06.142357  144327 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 13:21:06.142463  144327 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 13:21:06.142513  144327 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 13:21:06.142670  144327 kubeadm.go:310] [mark-control-plane] Marking the node flannel-297280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 13:21:06.142722  144327 kubeadm.go:310] [bootstrap-token] Using token: 78vwrn.m8eixtl0knqeesha
	I1028 13:21:06.144121  144327 out.go:235]   - Configuring RBAC rules ...
	I1028 13:21:06.144214  144327 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 13:21:06.144286  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 13:21:06.144482  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 13:21:06.144698  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 13:21:06.144907  144327 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 13:21:06.145039  144327 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 13:21:06.145209  144327 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 13:21:06.145282  144327 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 13:21:06.145349  144327 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 13:21:06.145373  144327 kubeadm.go:310] 
	I1028 13:21:06.145462  144327 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 13:21:06.145474  144327 kubeadm.go:310] 
	I1028 13:21:06.145583  144327 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 13:21:06.145592  144327 kubeadm.go:310] 
	I1028 13:21:06.145632  144327 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 13:21:06.145731  144327 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 13:21:06.145807  144327 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 13:21:06.145819  144327 kubeadm.go:310] 
	I1028 13:21:06.145890  144327 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 13:21:06.145903  144327 kubeadm.go:310] 
	I1028 13:21:06.145978  144327 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 13:21:06.145987  144327 kubeadm.go:310] 
	I1028 13:21:06.146068  144327 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 13:21:06.146167  144327 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 13:21:06.146264  144327 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 13:21:06.146273  144327 kubeadm.go:310] 
	I1028 13:21:06.146356  144327 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 13:21:06.146427  144327 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 13:21:06.146446  144327 kubeadm.go:310] 
	I1028 13:21:06.146550  144327 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 78vwrn.m8eixtl0knqeesha \
	I1028 13:21:06.146676  144327 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 13:21:06.146697  144327 kubeadm.go:310] 	--control-plane 
	I1028 13:21:06.146703  144327 kubeadm.go:310] 
	I1028 13:21:06.146771  144327 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 13:21:06.146777  144327 kubeadm.go:310] 
	I1028 13:21:06.146847  144327 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 78vwrn.m8eixtl0knqeesha \
	I1028 13:21:06.146990  144327 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
	I1028 13:21:06.147004  144327 cni.go:84] Creating CNI manager for "flannel"
	I1028 13:21:06.148709  144327 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1028 13:21:06.150058  144327 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 13:21:06.155609  144327 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 13:21:06.155625  144327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1028 13:21:06.173094  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 13:21:06.540077  144327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 13:21:06.540175  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:06.540175  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-297280 minikube.k8s.io/updated_at=2024_10_28T13_21_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=flannel-297280 minikube.k8s.io/primary=true
	I1028 13:21:06.573924  144327 ops.go:34] apiserver oom_adj: -16
	I1028 13:21:06.688052  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:07.188886  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:07.688772  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:08.188516  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:05.658897  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:08.158613  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:05.277513  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:05.278040  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:05.278069  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:05.277985  146132 retry.go:31] will retry after 905.19327ms: waiting for machine to come up
	I1028 13:21:06.184909  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:06.185361  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:06.185389  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:06.185308  146132 retry.go:31] will retry after 1.852852207s: waiting for machine to come up
	I1028 13:21:08.040431  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:08.041024  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:08.041052  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:08.040971  146132 retry.go:31] will retry after 1.93654077s: waiting for machine to come up
	I1028 13:21:08.688497  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:09.188956  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:09.688331  144327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:09.800998  144327 kubeadm.go:1113] duration metric: took 3.260888047s to wait for elevateKubeSystemPrivileges
	I1028 13:21:09.801037  144327 kubeadm.go:394] duration metric: took 14.575440018s to StartCluster
	I1028 13:21:09.801066  144327 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:09.801177  144327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:21:09.802895  144327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:09.803165  144327 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:21:09.803283  144327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 13:21:09.803534  144327 config.go:182] Loaded profile config "flannel-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:21:09.803585  144327 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 13:21:09.803689  144327 addons.go:69] Setting storage-provisioner=true in profile "flannel-297280"
	I1028 13:21:09.803697  144327 addons.go:69] Setting default-storageclass=true in profile "flannel-297280"
	I1028 13:21:09.803708  144327 addons.go:234] Setting addon storage-provisioner=true in "flannel-297280"
	I1028 13:21:09.803728  144327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-297280"
	I1028 13:21:09.803739  144327 host.go:66] Checking if "flannel-297280" exists ...
	I1028 13:21:09.804164  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.804203  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.804218  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.804244  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.805686  144327 out.go:177] * Verifying Kubernetes components...
	I1028 13:21:09.810212  144327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:09.822883  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I1028 13:21:09.823367  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.823672  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I1028 13:21:09.824030  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.824089  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.824131  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.824475  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.824789  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.824813  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.825096  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.825142  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.825266  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.825432  144327 main.go:141] libmachine: (flannel-297280) Calling .GetState
	I1028 13:21:09.829481  144327 addons.go:234] Setting addon default-storageclass=true in "flannel-297280"
	I1028 13:21:09.829545  144327 host.go:66] Checking if "flannel-297280" exists ...
	I1028 13:21:09.829946  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.829968  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.843725  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I1028 13:21:09.844248  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.844730  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.844746  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.845086  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.845254  144327 main.go:141] libmachine: (flannel-297280) Calling .GetState
	I1028 13:21:09.847092  144327 main.go:141] libmachine: (flannel-297280) Calling .DriverName
	I1028 13:21:09.848848  144327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 13:21:09.849992  144327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:09.850014  144327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 13:21:09.850031  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHHostname
	I1028 13:21:09.853223  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1028 13:21:09.853374  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.853801  144327 main.go:141] libmachine: (flannel-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:99:5f", ip: ""} in network mk-flannel-297280: {Iface:virbr2 ExpiryTime:2024-10-28 14:20:40 +0000 UTC Type:0 Mac:52:54:00:81:99:5f Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:flannel-297280 Clientid:01:52:54:00:81:99:5f}
	I1028 13:21:09.853824  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined IP address 192.168.50.159 and MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.854027  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.854075  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHPort
	I1028 13:21:09.854529  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.854545  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.854548  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHKeyPath
	I1028 13:21:09.854720  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHUsername
	I1028 13:21:09.854858  144327 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/flannel-297280/id_rsa Username:docker}
	I1028 13:21:09.855038  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.855692  144327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:09.855725  144327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:09.871250  144327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I1028 13:21:09.871800  144327 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:09.872425  144327 main.go:141] libmachine: Using API Version  1
	I1028 13:21:09.872444  144327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:09.872804  144327 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:09.873039  144327 main.go:141] libmachine: (flannel-297280) Calling .GetState
	I1028 13:21:09.874977  144327 main.go:141] libmachine: (flannel-297280) Calling .DriverName
	I1028 13:21:09.875199  144327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:09.875219  144327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 13:21:09.875235  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHHostname
	I1028 13:21:09.878037  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.878462  144327 main.go:141] libmachine: (flannel-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:99:5f", ip: ""} in network mk-flannel-297280: {Iface:virbr2 ExpiryTime:2024-10-28 14:20:40 +0000 UTC Type:0 Mac:52:54:00:81:99:5f Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:flannel-297280 Clientid:01:52:54:00:81:99:5f}
	I1028 13:21:09.878490  144327 main.go:141] libmachine: (flannel-297280) DBG | domain flannel-297280 has defined IP address 192.168.50.159 and MAC address 52:54:00:81:99:5f in network mk-flannel-297280
	I1028 13:21:09.878631  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHPort
	I1028 13:21:09.878786  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHKeyPath
	I1028 13:21:09.878885  144327 main.go:141] libmachine: (flannel-297280) Calling .GetSSHUsername
	I1028 13:21:09.878970  144327 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/flannel-297280/id_rsa Username:docker}
	I1028 13:21:10.029174  144327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:21:10.029271  144327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 13:21:10.055539  144327 node_ready.go:35] waiting up to 15m0s for node "flannel-297280" to be "Ready" ...
	I1028 13:21:10.216361  144327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:10.250575  144327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:10.590461  144327 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
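A minimal way to confirm the record injected above, assuming kubectl is pointed at the flannel-297280 context written to the kubeconfig earlier in the log:

	# Illustrative check only: dump the CoreDNS ConfigMap and look for the injected hosts block.
	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	# Expect a "192.168.50.1 host.minikube.internal" entry followed by "fallthrough".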
	I1028 13:21:11.009084  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009117  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009134  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009147  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009425  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009450  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009460  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009466  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009563  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.009593  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009612  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009631  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.009642  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.009799  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.009833  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009849  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009879  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.009898  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.009913  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.021965  144327 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:11.021986  144327 main.go:141] libmachine: (flannel-297280) Calling .Close
	I1028 13:21:11.022261  144327 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:11.022280  144327 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:11.022280  144327 main.go:141] libmachine: (flannel-297280) DBG | Closing plugin on server side
	I1028 13:21:11.024652  144327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 13:21:11.025756  144327 addons.go:510] duration metric: took 1.222167964s for enable addons: enabled=[storage-provisioner default-storageclass]
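A quick follow-up check of the two addons enabled above, under the same kubeconfig assumption, could be:

	# Illustrative verification; names taken from the log above.
	kubectl get storageclass                          # one class should be marked "(default)"
	kubectl -n kube-system get pod storage-provisioner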
	I1028 13:21:11.096842  144327 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-297280" context rescaled to 1 replicas
	I1028 13:21:12.059123  144327 node_ready.go:53] node "flannel-297280" has status "Ready":"False"
	I1028 13:21:10.160403  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:12.657934  142406 pod_ready.go:103] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:13.658140  142406 pod_ready.go:93] pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.658165  142406 pod_ready.go:82] duration metric: took 32.506180506s for pod "coredns-7c65d6cfc9-jdq8d" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.658179  142406 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.660114  142406 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s8gk8" not found
	I1028 13:21:13.660142  142406 pod_ready.go:82] duration metric: took 1.955168ms for pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace to be "Ready" ...
	E1028 13:21:13.660154  142406 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-s8gk8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s8gk8" not found
	I1028 13:21:13.660163  142406 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.665007  142406 pod_ready.go:93] pod "etcd-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.665032  142406 pod_ready.go:82] duration metric: took 4.858691ms for pod "etcd-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.665043  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.670487  142406 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.670508  142406 pod_ready.go:82] duration metric: took 5.45898ms for pod "kube-apiserver-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.670517  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.675354  142406 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.675377  142406 pod_ready.go:82] duration metric: took 4.853628ms for pod "kube-controller-manager-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.675389  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7dg4r" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.855393  142406 pod_ready.go:93] pod "kube-proxy-7dg4r" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:13.855417  142406 pod_ready.go:82] duration metric: took 180.02029ms for pod "kube-proxy-7dg4r" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:13.855428  142406 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:09.978929  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:09.979569  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:09.979603  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:09.979528  146132 retry.go:31] will retry after 2.517726332s: waiting for machine to come up
	I1028 13:21:12.499175  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:12.499651  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:12.499681  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:12.499584  146132 retry.go:31] will retry after 3.287997939s: waiting for machine to come up
	I1028 13:21:14.255590  142406 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:14.255622  142406 pod_ready.go:82] duration metric: took 400.186438ms for pod "kube-scheduler-enable-default-cni-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:14.255650  142406 pod_ready.go:39] duration metric: took 33.116717205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:14.255671  142406 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:21:14.255732  142406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:21:14.271555  142406 api_server.go:72] duration metric: took 34.034379367s to wait for apiserver process to appear ...
	I1028 13:21:14.271577  142406 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:21:14.271596  142406 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I1028 13:21:14.275775  142406 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I1028 13:21:14.276809  142406 api_server.go:141] control plane version: v1.31.2
	I1028 13:21:14.276829  142406 api_server.go:131] duration metric: took 5.245547ms to wait for apiserver health ...
	I1028 13:21:14.276838  142406 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:21:14.458435  142406 system_pods.go:59] 7 kube-system pods found
	I1028 13:21:14.458464  142406 system_pods.go:61] "coredns-7c65d6cfc9-jdq8d" [c8370b8b-04a0-4e84-b64b-08c166f3fc3b] Running
	I1028 13:21:14.458469  142406 system_pods.go:61] "etcd-enable-default-cni-297280" [0aee5b6e-8399-4fc4-ac09-e43f0ae2f755] Running
	I1028 13:21:14.458473  142406 system_pods.go:61] "kube-apiserver-enable-default-cni-297280" [732d43de-3ced-43d0-baa1-9bfcb2ebc808] Running
	I1028 13:21:14.458476  142406 system_pods.go:61] "kube-controller-manager-enable-default-cni-297280" [9e81877e-0de0-448b-9a73-ed546c6c7640] Running
	I1028 13:21:14.458479  142406 system_pods.go:61] "kube-proxy-7dg4r" [6743c3c5-5403-4ec7-b862-6dfb58bd7c39] Running
	I1028 13:21:14.458483  142406 system_pods.go:61] "kube-scheduler-enable-default-cni-297280" [5629459b-6e6a-45fa-8e01-db534d84bf0a] Running
	I1028 13:21:14.458486  142406 system_pods.go:61] "storage-provisioner" [939c3647-0f0f-4fc4-ab85-2abb6c2c2256] Running
	I1028 13:21:14.458497  142406 system_pods.go:74] duration metric: took 181.647136ms to wait for pod list to return data ...
	I1028 13:21:14.458507  142406 default_sa.go:34] waiting for default service account to be created ...
	I1028 13:21:14.655550  142406 default_sa.go:45] found service account: "default"
	I1028 13:21:14.655578  142406 default_sa.go:55] duration metric: took 197.064132ms for default service account to be created ...
	I1028 13:21:14.655592  142406 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 13:21:14.857890  142406 system_pods.go:86] 7 kube-system pods found
	I1028 13:21:14.857918  142406 system_pods.go:89] "coredns-7c65d6cfc9-jdq8d" [c8370b8b-04a0-4e84-b64b-08c166f3fc3b] Running
	I1028 13:21:14.857923  142406 system_pods.go:89] "etcd-enable-default-cni-297280" [0aee5b6e-8399-4fc4-ac09-e43f0ae2f755] Running
	I1028 13:21:14.857927  142406 system_pods.go:89] "kube-apiserver-enable-default-cni-297280" [732d43de-3ced-43d0-baa1-9bfcb2ebc808] Running
	I1028 13:21:14.857931  142406 system_pods.go:89] "kube-controller-manager-enable-default-cni-297280" [9e81877e-0de0-448b-9a73-ed546c6c7640] Running
	I1028 13:21:14.857934  142406 system_pods.go:89] "kube-proxy-7dg4r" [6743c3c5-5403-4ec7-b862-6dfb58bd7c39] Running
	I1028 13:21:14.857938  142406 system_pods.go:89] "kube-scheduler-enable-default-cni-297280" [5629459b-6e6a-45fa-8e01-db534d84bf0a] Running
	I1028 13:21:14.857941  142406 system_pods.go:89] "storage-provisioner" [939c3647-0f0f-4fc4-ab85-2abb6c2c2256] Running
	I1028 13:21:14.857948  142406 system_pods.go:126] duration metric: took 202.349362ms to wait for k8s-apps to be running ...
	I1028 13:21:14.857961  142406 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 13:21:14.858012  142406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:21:14.872839  142406 system_svc.go:56] duration metric: took 14.873794ms WaitForService to wait for kubelet
	I1028 13:21:14.872868  142406 kubeadm.go:582] duration metric: took 34.635694617s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:21:14.872894  142406 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:21:15.056821  142406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:21:15.056850  142406 node_conditions.go:123] node cpu capacity is 2
	I1028 13:21:15.056864  142406 node_conditions.go:105] duration metric: took 183.963126ms to run NodePressure ...
	I1028 13:21:15.056879  142406 start.go:241] waiting for startup goroutines ...
	I1028 13:21:15.056888  142406 start.go:246] waiting for cluster config update ...
	I1028 13:21:15.056902  142406 start.go:255] writing updated cluster config ...
	I1028 13:21:15.057178  142406 ssh_runner.go:195] Run: rm -f paused
	I1028 13:21:15.106793  142406 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 13:21:15.108998  142406 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-297280" cluster and "default" namespace by default
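With the configuration written as logged above, a short sanity check of the finished cluster might look like this (assuming the kubectl context name matches the cluster name shown):

	# Illustrative post-setup check.
	kubectl --context enable-default-cni-297280 get nodes -o wide
	kubectl --context enable-default-cni-297280 -n kube-system get pods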
	I1028 13:21:14.559385  144327 node_ready.go:53] node "flannel-297280" has status "Ready":"False"
	I1028 13:21:17.061958  144327 node_ready.go:53] node "flannel-297280" has status "Ready":"False"
	I1028 13:21:15.788817  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:15.789337  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:15.789369  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:15.789264  146132 retry.go:31] will retry after 3.901879397s: waiting for machine to come up
	I1028 13:21:19.693541  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:19.694044  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find current IP address of domain bridge-297280 in network mk-bridge-297280
	I1028 13:21:19.694068  146109 main.go:141] libmachine: (bridge-297280) DBG | I1028 13:21:19.693987  146132 retry.go:31] will retry after 4.556264872s: waiting for machine to come up
	I1028 13:21:18.558736  144327 node_ready.go:49] node "flannel-297280" has status "Ready":"True"
	I1028 13:21:18.558768  144327 node_ready.go:38] duration metric: took 8.503177167s for node "flannel-297280" to be "Ready" ...
	I1028 13:21:18.558782  144327 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:18.567049  144327 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:20.574718  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:23.073114  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:24.253019  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.253479  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has current primary IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.253497  146109 main.go:141] libmachine: (bridge-297280) Found IP for machine: 192.168.39.112
	I1028 13:21:24.253513  146109 main.go:141] libmachine: (bridge-297280) Reserving static IP address...
	I1028 13:21:24.253928  146109 main.go:141] libmachine: (bridge-297280) DBG | unable to find host DHCP lease matching {name: "bridge-297280", mac: "52:54:00:d9:5d:00", ip: "192.168.39.112"} in network mk-bridge-297280
	I1028 13:21:24.329131  146109 main.go:141] libmachine: (bridge-297280) DBG | Getting to WaitForSSH function...
	I1028 13:21:24.329161  146109 main.go:141] libmachine: (bridge-297280) Reserved static IP address: 192.168.39.112
	I1028 13:21:24.329173  146109 main.go:141] libmachine: (bridge-297280) Waiting for SSH to be available...
	I1028 13:21:24.332308  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.332773  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.332802  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.332929  146109 main.go:141] libmachine: (bridge-297280) DBG | Using SSH client type: external
	I1028 13:21:24.332957  146109 main.go:141] libmachine: (bridge-297280) DBG | Using SSH private key: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa (-rw-------)
	I1028 13:21:24.333010  146109 main.go:141] libmachine: (bridge-297280) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 13:21:24.333034  146109 main.go:141] libmachine: (bridge-297280) DBG | About to run SSH command:
	I1028 13:21:24.333051  146109 main.go:141] libmachine: (bridge-297280) DBG | exit 0
	I1028 13:21:24.455203  146109 main.go:141] libmachine: (bridge-297280) DBG | SSH cmd err, output: <nil>: 
	I1028 13:21:24.455478  146109 main.go:141] libmachine: (bridge-297280) KVM machine creation complete!
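The external SSH invocation printed above can be reused by hand to reach the new machine; a sketch, with the key path and address taken from the log:

	# Illustrative manual login; mirrors the WaitForSSH command above.
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa \
	  docker@192.168.39.112 'exit 0'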
	I1028 13:21:24.455756  146109 main.go:141] libmachine: (bridge-297280) Calling .GetConfigRaw
	I1028 13:21:24.456324  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:24.456487  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:24.456675  146109 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 13:21:24.456692  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:24.458016  146109 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 13:21:24.458028  146109 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 13:21:24.458033  146109 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 13:21:24.458038  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.460510  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.460899  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.460922  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.461102  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.461248  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.461427  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.461561  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.461715  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.461917  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.461928  146109 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 13:21:24.562870  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 13:21:24.562896  146109 main.go:141] libmachine: Detecting the provisioner...
	I1028 13:21:24.562903  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.565856  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.566275  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.566302  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.566485  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.566704  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.566898  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.567051  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.567222  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.567448  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.567463  146109 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 13:21:24.667804  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 13:21:24.667888  146109 main.go:141] libmachine: found compatible host: buildroot
	I1028 13:21:24.667898  146109 main.go:141] libmachine: Provisioning with buildroot...
	I1028 13:21:24.667905  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:21:24.668136  146109 buildroot.go:166] provisioning hostname "bridge-297280"
	I1028 13:21:24.668178  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:21:24.668373  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.671143  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.671526  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.671566  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.671676  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.671850  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.672013  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.672134  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.672297  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.672544  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.672562  146109 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-297280 && echo "bridge-297280" | sudo tee /etc/hostname
	I1028 13:21:24.785381  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-297280
	
	I1028 13:21:24.785409  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.788208  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.788581  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.788620  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.788718  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:24.788896  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.789033  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:24.789163  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:24.789349  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:24.789565  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:24.789583  146109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-297280' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-297280/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-297280' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 13:21:24.895789  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 13:21:24.895821  146109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19875-77800/.minikube CaCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19875-77800/.minikube}
	I1028 13:21:24.895915  146109 buildroot.go:174] setting up certificates
	I1028 13:21:24.895928  146109 provision.go:84] configureAuth start
	I1028 13:21:24.895942  146109 main.go:141] libmachine: (bridge-297280) Calling .GetMachineName
	I1028 13:21:24.896238  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:24.898957  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.899338  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.899366  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.899492  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:24.901788  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.902139  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:24.902164  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:24.902290  146109 provision.go:143] copyHostCerts
	I1028 13:21:24.902380  146109 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem, removing ...
	I1028 13:21:24.902398  146109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem
	I1028 13:21:24.902478  146109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/ca.pem (1082 bytes)
	I1028 13:21:24.902600  146109 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem, removing ...
	I1028 13:21:24.902611  146109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem
	I1028 13:21:24.902655  146109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/cert.pem (1123 bytes)
	I1028 13:21:24.902744  146109 exec_runner.go:144] found /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem, removing ...
	I1028 13:21:24.902753  146109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem
	I1028 13:21:24.902788  146109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19875-77800/.minikube/key.pem (1679 bytes)
	I1028 13:21:24.902884  146109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem org=jenkins.bridge-297280 san=[127.0.0.1 192.168.39.112 bridge-297280 localhost minikube]
	I1028 13:21:25.140172  146109 provision.go:177] copyRemoteCerts
	I1028 13:21:25.140236  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 13:21:25.140261  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.142733  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.143073  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.143097  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.143240  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.143457  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.143642  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.143765  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.221455  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1028 13:21:25.244215  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 13:21:25.268467  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 13:21:25.289137  146109 provision.go:87] duration metric: took 393.193977ms to configureAuth
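The SANs requested for the server certificate above can be inspected with openssl; a sketch using the server.pem path from the log:

	# Illustrative inspection of the generated server certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19875-77800/.minikube/machines/server.pem \
	  | grep -A 1 'Subject Alternative Name'
	# Should list 127.0.0.1, 192.168.39.112, bridge-297280, localhost and minikube, as requested above.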
	I1028 13:21:25.289160  146109 buildroot.go:189] setting minikube options for container-runtime
	I1028 13:21:25.289305  146109 config.go:182] Loaded profile config "bridge-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:21:25.289395  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.292192  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.292696  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.292726  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.292860  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.293050  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.293196  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.293335  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.293479  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:25.293711  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:25.293732  146109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 13:21:25.500053  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 13:21:25.500095  146109 main.go:141] libmachine: Checking connection to Docker...
	I1028 13:21:25.500106  146109 main.go:141] libmachine: (bridge-297280) Calling .GetURL
	I1028 13:21:25.501161  146109 main.go:141] libmachine: (bridge-297280) DBG | Using libvirt version 6000000
	I1028 13:21:25.503297  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.503698  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.503739  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.503853  146109 main.go:141] libmachine: Docker is up and running!
	I1028 13:21:25.503871  146109 main.go:141] libmachine: Reticulating splines...
	I1028 13:21:25.503881  146109 client.go:171] duration metric: took 25.675293626s to LocalClient.Create
	I1028 13:21:25.503909  146109 start.go:167] duration metric: took 25.675366229s to libmachine.API.Create "bridge-297280"
	I1028 13:21:25.503922  146109 start.go:293] postStartSetup for "bridge-297280" (driver="kvm2")
	I1028 13:21:25.503935  146109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 13:21:25.503956  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.504185  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 13:21:25.504229  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.506718  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.507089  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.507115  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.507257  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.507422  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.507564  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.507729  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.585186  146109 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 13:21:25.589181  146109 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 13:21:25.589204  146109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/addons for local assets ...
	I1028 13:21:25.589261  146109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19875-77800/.minikube/files for local assets ...
	I1028 13:21:25.589343  146109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem -> 849652.pem in /etc/ssl/certs
	I1028 13:21:25.589440  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 13:21:25.599094  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:21:25.622862  146109 start.go:296] duration metric: took 118.923974ms for postStartSetup
	I1028 13:21:25.622922  146109 main.go:141] libmachine: (bridge-297280) Calling .GetConfigRaw
	I1028 13:21:25.623541  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:25.625958  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.626346  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.626380  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.626569  146109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/config.json ...
	I1028 13:21:25.626775  146109 start.go:128] duration metric: took 25.819861563s to createHost
	I1028 13:21:25.626803  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.629111  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.629433  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.629463  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.629601  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.629768  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.629912  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.630087  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.630247  146109 main.go:141] libmachine: Using SSH client type: native
	I1028 13:21:25.630428  146109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I1028 13:21:25.630444  146109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 13:21:25.731923  146109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730121685.709299173
	
	I1028 13:21:25.731947  146109 fix.go:216] guest clock: 1730121685.709299173
	I1028 13:21:25.731957  146109 fix.go:229] Guest: 2024-10-28 13:21:25.709299173 +0000 UTC Remote: 2024-10-28 13:21:25.626789068 +0000 UTC m=+25.939003285 (delta=82.510105ms)
	I1028 13:21:25.732013  146109 fix.go:200] guest clock delta is within tolerance: 82.510105ms
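(The delta reported above is simply guest minus remote: 1730121685.709299173 - 1730121685.626789068 ≈ 0.082510105 s, i.e. the 82.510105ms checked against the tolerance.)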
	I1028 13:21:25.732025  146109 start.go:83] releasing machines lock for "bridge-297280", held for 25.925238039s
	I1028 13:21:25.732056  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.732342  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:25.734684  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.734994  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.735020  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.735193  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.735677  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.735839  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:25.735930  146109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 13:21:25.735991  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.736098  146109 ssh_runner.go:195] Run: cat /version.json
	I1028 13:21:25.736123  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:25.738890  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739070  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739322  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.739356  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739451  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.739483  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:25.739513  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:25.739605  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.739701  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:25.739778  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.739841  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:25.739906  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.739955  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:25.740103  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:25.811976  146109 ssh_runner.go:195] Run: systemctl --version
	I1028 13:21:25.836052  146109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 13:21:25.988527  146109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 13:21:25.994625  146109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 13:21:25.994697  146109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 13:21:26.009575  146109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
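The rename performed by the find/mv command above can be confirmed by listing the CNI config directory on the guest:

	# Illustrative listing inside the VM; the .mk_disabled suffix comes from the command above.
	ls /etc/cni/net.d/
	# 87-podman-bridge.conflist.mk_disabled is expected here, per the log line above.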
	I1028 13:21:26.009608  146109 start.go:495] detecting cgroup driver to use...
	I1028 13:21:26.009692  146109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 13:21:26.027471  146109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 13:21:26.039855  146109 docker.go:217] disabling cri-docker service (if available) ...
	I1028 13:21:26.039903  146109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 13:21:26.052266  146109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 13:21:26.064513  146109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 13:21:26.179689  146109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 13:21:26.329610  146109 docker.go:233] disabling docker service ...
	I1028 13:21:26.329697  146109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 13:21:26.343046  146109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 13:21:26.354840  146109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 13:21:26.500546  146109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 13:21:26.629347  146109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 13:21:26.646273  146109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 13:21:26.664485  146109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 13:21:26.664551  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.675273  146109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 13:21:26.675335  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.685750  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.695352  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.704981  146109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 13:21:26.715492  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.725307  146109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 13:21:26.744718  146109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
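	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following key/value pairs (a sketch reconstructed from the commands shown; the real drop-in keeps its TOML table headers and any other keys it already had):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]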
	I1028 13:21:26.757333  146109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 13:21:26.767238  146109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 13:21:26.767303  146109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 13:21:26.781126  146109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
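	The sysctl/modprobe/echo sequence above covers the standard kernel prerequisites for Kubernetes pod networking: the br_netfilter module must be loaded so bridged traffic is visible to iptables, and IPv4 forwarding must be enabled. On a persistently configured host the same settings would normally live in a sysctl drop-in; a sketch of the equivalent (the file name is an assumption):

	# /etc/sysctl.d/99-kubernetes.conf (illustrative)
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1

	sudo modprobe br_netfilter && sudo sysctl --system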
	I1028 13:21:26.790731  146109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:26.930586  146109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 13:21:27.022198  146109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 13:21:27.022271  146109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 13:21:27.027112  146109 start.go:563] Will wait 60s for crictl version
	I1028 13:21:27.027178  146109 ssh_runner.go:195] Run: which crictl
	I1028 13:21:27.031088  146109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 13:21:27.075908  146109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 13:21:27.075987  146109 ssh_runner.go:195] Run: crio --version
	I1028 13:21:27.106900  146109 ssh_runner.go:195] Run: crio --version
	I1028 13:21:27.143942  146109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 13:21:25.073701  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:27.077344  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:27.145246  146109 main.go:141] libmachine: (bridge-297280) Calling .GetIP
	I1028 13:21:27.148446  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:27.148796  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:27.148830  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:27.149063  146109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 13:21:27.153052  146109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:21:27.167591  146109 kubeadm.go:883] updating cluster {Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 13:21:27.167737  146109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 13:21:27.167785  146109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:21:27.201486  146109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 13:21:27.201560  146109 ssh_runner.go:195] Run: which lz4
	I1028 13:21:27.205413  146109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 13:21:27.209442  146109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 13:21:27.209475  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 13:21:28.417320  146109 crio.go:462] duration metric: took 1.211930429s to copy over tarball
	I1028 13:21:28.417419  146109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 13:21:30.541806  146109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.12433452s)
	I1028 13:21:30.541849  146109 crio.go:469] duration metric: took 2.124498629s to extract the tarball
	I1028 13:21:30.541861  146109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 13:21:30.580967  146109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 13:21:30.620580  146109 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 13:21:30.620611  146109 cache_images.go:84] Images are preloaded, skipping loading
	I1028 13:21:30.620622  146109 kubeadm.go:934] updating node { 192.168.39.112 8443 v1.31.2 crio true true} ...
	I1028 13:21:30.620757  146109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-297280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1028 13:21:30.620851  146109 ssh_runner.go:195] Run: crio config
	I1028 13:21:30.668093  146109 cni.go:84] Creating CNI manager for "bridge"
	I1028 13:21:30.668125  146109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 13:21:30.668155  146109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.112 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-297280 NodeName:bridge-297280 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 13:21:30.668310  146109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-297280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.112"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.112"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 13:21:30.668391  146109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 13:21:30.677903  146109 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 13:21:30.677965  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 13:21:30.686535  146109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 13:21:30.701888  146109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 13:21:30.719347  146109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1028 13:21:30.737627  146109 ssh_runner.go:195] Run: grep 192.168.39.112	control-plane.minikube.internal$ /etc/hosts
	I1028 13:21:30.741621  146109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 13:21:30.754300  146109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:30.881740  146109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:21:30.899697  146109 certs.go:68] Setting up /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280 for IP: 192.168.39.112
	I1028 13:21:30.899718  146109 certs.go:194] generating shared ca certs ...
	I1028 13:21:30.899734  146109 certs.go:226] acquiring lock for ca certs: {Name:mk37e67f21d9c5a4d685aa017451e1d81f674a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:30.899892  146109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key
	I1028 13:21:30.899932  146109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key
	I1028 13:21:30.899942  146109 certs.go:256] generating profile certs ...
	I1028 13:21:30.899994  146109 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.key
	I1028 13:21:30.900007  146109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt with IP's: []
	I1028 13:21:30.987550  146109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt ...
	I1028 13:21:30.987586  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.crt: {Name:mk79a6093853f2cde5aa1baf0f2bc6f508cee547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:30.987783  146109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.key ...
	I1028 13:21:30.987798  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/client.key: {Name:mk09f697959641408c65ca0388fc1d990b962a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:30.987880  146109 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791
	I1028 13:21:30.987896  146109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.112]
	I1028 13:21:31.188372  146109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791 ...
	I1028 13:21:31.188403  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791: {Name:mk496324e33762e58876509195201ae7807339d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:31.188571  146109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791 ...
	I1028 13:21:31.188587  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791: {Name:mka0264ae6a0338ceffb7420c63e6b9a4b434e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:31.188665  146109 certs.go:381] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt.2f461791 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt
	I1028 13:21:31.188760  146109 certs.go:385] copying /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key.2f461791 -> /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key
	I1028 13:21:31.188816  146109 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key
	I1028 13:21:31.188831  146109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt with IP's: []
	I1028 13:21:31.650341  146109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt ...
	I1028 13:21:31.650377  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt: {Name:mk2f355024f7f8b979d837a3536f0df783524eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:31.650553  146109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key ...
	I1028 13:21:31.650563  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key: {Name:mk428d65df121e82dbcfe11b89d556b50be8b966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
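	The certs.go/crypto.go steps above generate the per-profile certificates: a client cert for kubectl, an apiserver serving cert whose SANs include the service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.112, and an aggregator proxy-client cert, all signed by the shared minikubeCA. A minimal sketch of producing a certificate with those IP SANs using Go's standard crypto/x509 package (self-signed here for brevity; minikube signs with its CA, and none of the names below are minikube's):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative only: a serving certificate carrying the same IP SANs
		// the log shows for the apiserver profile cert.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.112"),
			},
		}
		// Self-signed: template is also used as the parent certificate.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}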
	I1028 13:21:31.650736  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem (1338 bytes)
	W1028 13:21:31.650774  146109 certs.go:480] ignoring /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965_empty.pem, impossibly tiny 0 bytes
	I1028 13:21:31.650783  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca-key.pem (1679 bytes)
	I1028 13:21:31.650805  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/ca.pem (1082 bytes)
	I1028 13:21:31.650831  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/cert.pem (1123 bytes)
	I1028 13:21:31.650854  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/certs/key.pem (1679 bytes)
	I1028 13:21:31.650889  146109 certs.go:484] found cert: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem (1708 bytes)
	I1028 13:21:31.651462  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 13:21:31.679781  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 13:21:31.702015  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 13:21:31.732449  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 13:21:31.754292  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 13:21:31.776328  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 13:21:31.798701  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 13:21:31.820197  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/bridge-297280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 13:21:31.841519  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/certs/84965.pem --> /usr/share/ca-certificates/84965.pem (1338 bytes)
	I1028 13:21:31.863529  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/ssl/certs/849652.pem --> /usr/share/ca-certificates/849652.pem (1708 bytes)
	I1028 13:21:31.885112  146109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 13:21:31.906558  146109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 13:21:31.920959  146109 ssh_runner.go:195] Run: openssl version
	I1028 13:21:31.926007  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/849652.pem && ln -fs /usr/share/ca-certificates/849652.pem /etc/ssl/certs/849652.pem"
	I1028 13:21:31.936992  146109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/849652.pem
	I1028 13:21:31.941092  146109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:49 /usr/share/ca-certificates/849652.pem
	I1028 13:21:31.941146  146109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/849652.pem
	I1028 13:21:31.946538  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/849652.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 13:21:31.956356  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 13:21:31.965919  146109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:21:31.969858  146109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:21:31.969910  146109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 13:21:31.975148  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 13:21:31.984781  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84965.pem && ln -fs /usr/share/ca-certificates/84965.pem /etc/ssl/certs/84965.pem"
	I1028 13:21:31.994314  146109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84965.pem
	I1028 13:21:31.998078  146109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:49 /usr/share/ca-certificates/84965.pem
	I1028 13:21:31.998118  146109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84965.pem
	I1028 13:21:32.003062  146109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84965.pem /etc/ssl/certs/51391683.0"
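	The openssl x509 -hash / ln -fs pairs above install the certificates in OpenSSL's hashed-directory layout: each file under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0, which is how TLS libraries look up a CA by subject. The hash for the minikubeCA link created above can be reproduced with the same command the log runs:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941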
	I1028 13:21:32.012656  146109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 13:21:32.016062  146109 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 13:21:32.016122  146109 kubeadm.go:392] StartCluster: {Name:bridge-297280 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-297280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 13:21:32.016222  146109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 13:21:32.016285  146109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 13:21:32.052814  146109 cri.go:89] found id: ""
	I1028 13:21:32.052890  146109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 13:21:32.062470  146109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 13:21:32.072842  146109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 13:21:32.085188  146109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 13:21:32.085206  146109 kubeadm.go:157] found existing configuration files:
	
	I1028 13:21:32.085243  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 13:21:32.094795  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 13:21:32.094865  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 13:21:32.105438  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 13:21:32.114693  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 13:21:32.114751  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 13:21:32.125368  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 13:21:32.134360  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 13:21:32.134425  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 13:21:32.143533  146109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 13:21:32.152254  146109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 13:21:32.152312  146109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 13:21:32.161290  146109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 13:21:32.213514  146109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 13:21:32.213652  146109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 13:21:32.309172  146109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 13:21:32.309295  146109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 13:21:32.309415  146109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 13:21:32.320530  146109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 13:21:29.574046  144327 pod_ready.go:103] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:31.073720  144327 pod_ready.go:93] pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.073745  144327 pod_ready.go:82] duration metric: took 12.506663632s for pod "coredns-7c65d6cfc9-dj9l8" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.073759  144327 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.077961  144327 pod_ready.go:93] pod "etcd-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.077980  144327 pod_ready.go:82] duration metric: took 4.214913ms for pod "etcd-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.077988  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.082443  144327 pod_ready.go:93] pod "kube-apiserver-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.082460  144327 pod_ready.go:82] duration metric: took 4.466366ms for pod "kube-apiserver-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.082468  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.086562  144327 pod_ready.go:93] pod "kube-controller-manager-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.086589  144327 pod_ready.go:82] duration metric: took 4.113046ms for pod "kube-controller-manager-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.086600  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-w25fl" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.090417  144327 pod_ready.go:93] pod "kube-proxy-w25fl" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.090434  144327 pod_ready.go:82] duration metric: took 3.826364ms for pod "kube-proxy-w25fl" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.090442  144327 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.472296  144327 pod_ready.go:93] pod "kube-scheduler-flannel-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:21:31.472320  144327 pod_ready.go:82] duration metric: took 381.871698ms for pod "kube-scheduler-flannel-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:31.472330  144327 pod_ready.go:39] duration metric: took 12.913532889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:31.472345  144327 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:21:31.472398  144327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:21:31.489350  144327 api_server.go:72] duration metric: took 21.686135288s to wait for apiserver process to appear ...
	I1028 13:21:31.489385  144327 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:21:31.489412  144327 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1028 13:21:31.493639  144327 api_server.go:279] https://192.168.50.159:8443/healthz returned 200:
	ok
	I1028 13:21:31.494587  144327 api_server.go:141] control plane version: v1.31.2
	I1028 13:21:31.494612  144327 api_server.go:131] duration metric: took 5.218433ms to wait for apiserver health ...
	I1028 13:21:31.494622  144327 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:21:31.674868  144327 system_pods.go:59] 7 kube-system pods found
	I1028 13:21:31.674906  144327 system_pods.go:61] "coredns-7c65d6cfc9-dj9l8" [827e8aa3-3be8-4683-909b-e1ae71a5e4ca] Running
	I1028 13:21:31.674915  144327 system_pods.go:61] "etcd-flannel-297280" [0d814ce7-0894-461e-a6f1-c5aeef16179b] Running
	I1028 13:21:31.674920  144327 system_pods.go:61] "kube-apiserver-flannel-297280" [7c428754-bd41-41f2-807f-6e382c3a9f98] Running
	I1028 13:21:31.674924  144327 system_pods.go:61] "kube-controller-manager-flannel-297280" [0c2bf4a8-3fc9-4540-80f7-914f70794f35] Running
	I1028 13:21:31.674929  144327 system_pods.go:61] "kube-proxy-w25fl" [1d762705-572a-4f70-a6a7-cd2609806ff4] Running
	I1028 13:21:31.674933  144327 system_pods.go:61] "kube-scheduler-flannel-297280" [45bc3533-f30b-4238-a30b-2e219ffc864b] Running
	I1028 13:21:31.674937  144327 system_pods.go:61] "storage-provisioner" [2511defb-d9ca-46b0-a02a-6ddf77363fa2] Running
	I1028 13:21:31.674946  144327 system_pods.go:74] duration metric: took 180.316718ms to wait for pod list to return data ...
	I1028 13:21:31.674957  144327 default_sa.go:34] waiting for default service account to be created ...
	I1028 13:21:31.872025  144327 default_sa.go:45] found service account: "default"
	I1028 13:21:31.872059  144327 default_sa.go:55] duration metric: took 197.092111ms for default service account to be created ...
	I1028 13:21:31.872073  144327 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 13:21:32.075282  144327 system_pods.go:86] 7 kube-system pods found
	I1028 13:21:32.075308  144327 system_pods.go:89] "coredns-7c65d6cfc9-dj9l8" [827e8aa3-3be8-4683-909b-e1ae71a5e4ca] Running
	I1028 13:21:32.075317  144327 system_pods.go:89] "etcd-flannel-297280" [0d814ce7-0894-461e-a6f1-c5aeef16179b] Running
	I1028 13:21:32.075323  144327 system_pods.go:89] "kube-apiserver-flannel-297280" [7c428754-bd41-41f2-807f-6e382c3a9f98] Running
	I1028 13:21:32.075330  144327 system_pods.go:89] "kube-controller-manager-flannel-297280" [0c2bf4a8-3fc9-4540-80f7-914f70794f35] Running
	I1028 13:21:32.075354  144327 system_pods.go:89] "kube-proxy-w25fl" [1d762705-572a-4f70-a6a7-cd2609806ff4] Running
	I1028 13:21:32.075363  144327 system_pods.go:89] "kube-scheduler-flannel-297280" [45bc3533-f30b-4238-a30b-2e219ffc864b] Running
	I1028 13:21:32.075368  144327 system_pods.go:89] "storage-provisioner" [2511defb-d9ca-46b0-a02a-6ddf77363fa2] Running
	I1028 13:21:32.075379  144327 system_pods.go:126] duration metric: took 203.300304ms to wait for k8s-apps to be running ...
	I1028 13:21:32.075392  144327 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 13:21:32.075455  144327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:21:32.091407  144327 system_svc.go:56] duration metric: took 16.004372ms WaitForService to wait for kubelet
	I1028 13:21:32.091443  144327 kubeadm.go:582] duration metric: took 22.288245052s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:21:32.091471  144327 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:21:32.272052  144327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:21:32.272081  144327 node_conditions.go:123] node cpu capacity is 2
	I1028 13:21:32.272092  144327 node_conditions.go:105] duration metric: took 180.614791ms to run NodePressure ...
	I1028 13:21:32.272105  144327 start.go:241] waiting for startup goroutines ...
	I1028 13:21:32.272111  144327 start.go:246] waiting for cluster config update ...
	I1028 13:21:32.272121  144327 start.go:255] writing updated cluster config ...
	I1028 13:21:32.389776  144327 ssh_runner.go:195] Run: rm -f paused
	I1028 13:21:32.453364  144327 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 13:21:32.598061  144327 out.go:177] * Done! kubectl is now configured to use "flannel-297280" cluster and "default" namespace by default
	I1028 13:21:32.449773  146109 out.go:235]   - Generating certificates and keys ...
	I1028 13:21:32.449921  146109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 13:21:32.450013  146109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 13:21:32.526386  146109 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 13:21:32.701429  146109 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 13:21:32.987102  146109 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 13:21:33.144164  146109 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 13:21:33.634779  146109 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 13:21:33.634959  146109 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-297280 localhost] and IPs [192.168.39.112 127.0.0.1 ::1]
	I1028 13:21:33.787442  146109 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 13:21:33.787554  146109 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-297280 localhost] and IPs [192.168.39.112 127.0.0.1 ::1]
	I1028 13:21:33.889788  146109 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 13:21:33.953480  146109 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 13:21:34.480108  146109 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 13:21:34.480335  146109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 13:21:34.629223  146109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 13:21:34.944761  146109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 13:21:35.474045  146109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 13:21:35.613778  146109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 13:21:35.851081  146109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 13:21:35.851760  146109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 13:21:35.857087  146109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 13:21:35.858729  146109 out.go:235]   - Booting up control plane ...
	I1028 13:21:35.858855  146109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 13:21:35.858973  146109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 13:21:35.859145  146109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 13:21:35.879880  146109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 13:21:35.889596  146109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 13:21:35.889664  146109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 13:21:36.049895  146109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 13:21:36.050130  146109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 13:21:36.551969  146109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.988197ms
	I1028 13:21:36.552055  146109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 13:21:42.053142  146109 kubeadm.go:310] [api-check] The API server is healthy after 5.502178141s
	I1028 13:21:42.066497  146109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 13:21:42.085850  146109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 13:21:42.158319  146109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 13:21:42.158576  146109 kubeadm.go:310] [mark-control-plane] Marking the node bridge-297280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 13:21:42.191377  146109 kubeadm.go:310] [bootstrap-token] Using token: 90lr9g.qn73b7ozx49ax2he
	I1028 13:21:42.192796  146109 out.go:235]   - Configuring RBAC rules ...
	I1028 13:21:42.192951  146109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 13:21:42.209600  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 13:21:42.229224  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 13:21:42.235513  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 13:21:42.239838  146109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 13:21:42.249492  146109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 13:21:42.462811  146109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 13:21:43.161813  146109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 13:21:43.462072  146109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 13:21:43.462999  146109 kubeadm.go:310] 
	I1028 13:21:43.463081  146109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 13:21:43.463090  146109 kubeadm.go:310] 
	I1028 13:21:43.463187  146109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 13:21:43.463193  146109 kubeadm.go:310] 
	I1028 13:21:43.463230  146109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 13:21:43.463300  146109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 13:21:43.463380  146109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 13:21:43.463415  146109 kubeadm.go:310] 
	I1028 13:21:43.463526  146109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 13:21:43.463544  146109 kubeadm.go:310] 
	I1028 13:21:43.463610  146109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 13:21:43.463618  146109 kubeadm.go:310] 
	I1028 13:21:43.463706  146109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 13:21:43.463804  146109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 13:21:43.463887  146109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 13:21:43.463897  146109 kubeadm.go:310] 
	I1028 13:21:43.463985  146109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 13:21:43.464102  146109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 13:21:43.464119  146109 kubeadm.go:310] 
	I1028 13:21:43.464235  146109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 90lr9g.qn73b7ozx49ax2he \
	I1028 13:21:43.464385  146109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 \
	I1028 13:21:43.464417  146109 kubeadm.go:310] 	--control-plane 
	I1028 13:21:43.464427  146109 kubeadm.go:310] 
	I1028 13:21:43.464542  146109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 13:21:43.464551  146109 kubeadm.go:310] 
	I1028 13:21:43.464658  146109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 90lr9g.qn73b7ozx49ax2he \
	I1028 13:21:43.464824  146109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:beaaad6dcefec538b7301716d9e53a13a259ea356f04d05d91f8fb074b77da23 
	I1028 13:21:43.465570  146109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 13:21:43.465604  146109 cni.go:84] Creating CNI manager for "bridge"
	I1028 13:21:43.467222  146109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 13:21:43.468504  146109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 13:21:43.484876  146109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
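	The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration selected by CNI:bridge. The file contents are not printed in the log; as a rough sketch, a bridge plus portmap conflist for the 10.244.0.0/16 pod CIDR generally looks like the following (all field values are assumptions, not the file minikube actually writes):

	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}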
	I1028 13:21:43.511776  146109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 13:21:43.511924  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:43.511934  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-297280 minikube.k8s.io/updated_at=2024_10_28T13_21_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=bridge-297280 minikube.k8s.io/primary=true
	I1028 13:21:43.546899  146109 ops.go:34] apiserver oom_adj: -16
	I1028 13:21:43.608972  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:44.109937  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:44.609124  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:45.109712  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:45.609088  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:46.109103  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:46.609719  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:47.109301  146109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 13:21:47.183694  146109 kubeadm.go:1113] duration metric: took 3.671834423s to wait for elevateKubeSystemPrivileges
	I1028 13:21:47.183744  146109 kubeadm.go:394] duration metric: took 15.167628801s to StartCluster
	I1028 13:21:47.183770  146109 settings.go:142] acquiring lock: {Name:mk364f71ed22a657ba3b444d7de412d714d0c270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:47.183859  146109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 13:21:47.185196  146109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19875-77800/kubeconfig: {Name:mkdb1f6ea74f9d0f1a713dc3324ce2338814a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 13:21:47.185417  146109 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 13:21:47.185426  146109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 13:21:47.185489  146109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 13:21:47.185593  146109 addons.go:69] Setting storage-provisioner=true in profile "bridge-297280"
	I1028 13:21:47.185609  146109 addons.go:69] Setting default-storageclass=true in profile "bridge-297280"
	I1028 13:21:47.185615  146109 config.go:182] Loaded profile config "bridge-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 13:21:47.185648  146109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-297280"
	I1028 13:21:47.185613  146109 addons.go:234] Setting addon storage-provisioner=true in "bridge-297280"
	I1028 13:21:47.185755  146109 host.go:66] Checking if "bridge-297280" exists ...
	I1028 13:21:47.186106  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.186148  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.186157  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.186184  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.187270  146109 out.go:177] * Verifying Kubernetes components...
	I1028 13:21:47.188728  146109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 13:21:47.201939  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I1028 13:21:47.202108  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I1028 13:21:47.202428  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.202582  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.202994  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.203021  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.203088  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.203099  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.203366  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.203430  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.203533  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:47.204023  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.204054  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.207386  146109 addons.go:234] Setting addon default-storageclass=true in "bridge-297280"
	I1028 13:21:47.207429  146109 host.go:66] Checking if "bridge-297280" exists ...
	I1028 13:21:47.207824  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.207867  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.224857  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
	I1028 13:21:47.225295  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.225918  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.225947  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.226458  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.226742  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I1028 13:21:47.226871  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:47.227255  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.227793  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.227817  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.228280  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.228950  146109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 13:21:47.228970  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:47.228989  146109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 13:21:47.231130  146109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 13:21:47.232676  146109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:47.232704  146109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 13:21:47.232726  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:47.236002  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.236524  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:47.236548  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.236612  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:47.236799  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:47.236967  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:47.237083  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:47.247783  146109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33597
	I1028 13:21:47.248301  146109 main.go:141] libmachine: () Calling .GetVersion
	I1028 13:21:47.248830  146109 main.go:141] libmachine: Using API Version  1
	I1028 13:21:47.248855  146109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 13:21:47.249221  146109 main.go:141] libmachine: () Calling .GetMachineName
	I1028 13:21:47.249435  146109 main.go:141] libmachine: (bridge-297280) Calling .GetState
	I1028 13:21:47.250978  146109 main.go:141] libmachine: (bridge-297280) Calling .DriverName
	I1028 13:21:47.251196  146109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:47.251215  146109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 13:21:47.251234  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHHostname
	I1028 13:21:47.253593  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.254041  146109 main.go:141] libmachine: (bridge-297280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5d:00", ip: ""} in network mk-bridge-297280: {Iface:virbr4 ExpiryTime:2024-10-28 14:21:14 +0000 UTC Type:0 Mac:52:54:00:d9:5d:00 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:bridge-297280 Clientid:01:52:54:00:d9:5d:00}
	I1028 13:21:47.254057  146109 main.go:141] libmachine: (bridge-297280) DBG | domain bridge-297280 has defined IP address 192.168.39.112 and MAC address 52:54:00:d9:5d:00 in network mk-bridge-297280
	I1028 13:21:47.254240  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHPort
	I1028 13:21:47.254488  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHKeyPath
	I1028 13:21:47.254652  146109 main.go:141] libmachine: (bridge-297280) Calling .GetSSHUsername
	I1028 13:21:47.254773  146109 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/bridge-297280/id_rsa Username:docker}
	I1028 13:21:47.387303  146109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 13:21:47.392573  146109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 13:21:47.521581  146109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 13:21:47.523934  146109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 13:21:47.868498  146109 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
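The sed invocation above rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal; the block it injects ahead of the forward directive in the Corefile is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

(a `log` directive is also inserted before `errors`). As an illustrative check, not part of the test itself, the resulting ConfigMap can be inspected on the live cluster with:

        kubectl --context bridge-297280 -n kube-system get configmap coredns -o yaml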
	I1028 13:21:47.868675  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:47.868706  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:47.869003  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:47.869022  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:47.869037  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:47.869044  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:47.869326  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:47.869342  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:47.869910  146109 node_ready.go:35] waiting up to 15m0s for node "bridge-297280" to be "Ready" ...
	I1028 13:21:47.902383  146109 node_ready.go:49] node "bridge-297280" has status "Ready":"True"
	I1028 13:21:47.902408  146109 node_ready.go:38] duration metric: took 32.473404ms for node "bridge-297280" to be "Ready" ...
	I1028 13:21:47.902419  146109 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:21:47.927652  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:47.927685  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:47.927960  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:47.927979  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:47.927983  146109 main.go:141] libmachine: (bridge-297280) DBG | Closing plugin on server side
	I1028 13:21:47.931924  146109 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace to be "Ready" ...
	I1028 13:21:48.254058  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:48.254097  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:48.254462  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:48.254490  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:48.254500  146109 main.go:141] libmachine: Making call to close driver server
	I1028 13:21:48.254513  146109 main.go:141] libmachine: (bridge-297280) Calling .Close
	I1028 13:21:48.254872  146109 main.go:141] libmachine: (bridge-297280) DBG | Closing plugin on server side
	I1028 13:21:48.254976  146109 main.go:141] libmachine: Successfully made call to close driver server
	I1028 13:21:48.254996  146109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 13:21:48.256591  146109 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 13:21:48.257768  146109 addons.go:510] duration metric: took 1.072279574s for enable addons: enabled=[default-storageclass storage-provisioner]
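The two addons enabled here are applied from the manifests copied to /etc/kubernetes/addons/ (storage-provisioner.yaml and storageclass.yaml). As an illustrative sketch only (profile name taken from this run; this is not the code path the test uses), the same addon state can be inspected or toggled with the minikube CLI:

        minikube -p bridge-297280 addons list
        minikube -p bridge-297280 addons enable storage-provisioner
        minikube -p bridge-297280 addons enable default-storageclass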
	I1028 13:21:48.374965  146109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-297280" context rescaled to 1 replicas
	I1028 13:21:49.942863  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:52.437501  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:54.438249  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:56.439499  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:21:58.941152  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:01.438688  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:03.438742  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:05.937699  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:07.938403  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:10.438349  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:12.438734  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:14.938257  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:16.938345  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:18.938663  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:21.437429  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:23.437471  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:25.438094  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:27.937981  146109 pod_ready.go:103] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"False"
	I1028 13:22:29.938183  146109 pod_ready.go:93] pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.938210  146109 pod_ready.go:82] duration metric: took 42.006254407s for pod "coredns-7c65d6cfc9-sv82x" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.938223  146109 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.939711  146109 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vg67t" not found
	I1028 13:22:29.939738  146109 pod_ready.go:82] duration metric: took 1.50706ms for pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace to be "Ready" ...
	E1028 13:22:29.939750  146109 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-vg67t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vg67t" not found
	I1028 13:22:29.939760  146109 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.943344  146109 pod_ready.go:93] pod "etcd-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.943366  146109 pod_ready.go:82] duration metric: took 3.598317ms for pod "etcd-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.943378  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.947006  146109 pod_ready.go:93] pod "kube-apiserver-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.947024  146109 pod_ready.go:82] duration metric: took 3.639746ms for pod "kube-apiserver-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.947032  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.951162  146109 pod_ready.go:93] pod "kube-controller-manager-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:29.951177  146109 pod_ready.go:82] duration metric: took 4.13895ms for pod "kube-controller-manager-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:29.951186  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-b5p9h" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.135866  146109 pod_ready.go:93] pod "kube-proxy-b5p9h" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:30.135893  146109 pod_ready.go:82] duration metric: took 184.69985ms for pod "kube-proxy-b5p9h" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.135905  146109 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.535566  146109 pod_ready.go:93] pod "kube-scheduler-bridge-297280" in "kube-system" namespace has status "Ready":"True"
	I1028 13:22:30.535596  146109 pod_ready.go:82] duration metric: took 399.681902ms for pod "kube-scheduler-bridge-297280" in "kube-system" namespace to be "Ready" ...
	I1028 13:22:30.535606  146109 pod_ready.go:39] duration metric: took 42.633175047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 13:22:30.535640  146109 api_server.go:52] waiting for apiserver process to appear ...
	I1028 13:22:30.535704  146109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 13:22:30.551176  146109 api_server.go:72] duration metric: took 43.365726528s to wait for apiserver process to appear ...
	I1028 13:22:30.551199  146109 api_server.go:88] waiting for apiserver healthz status ...
	I1028 13:22:30.551217  146109 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I1028 13:22:30.555201  146109 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I1028 13:22:30.556230  146109 api_server.go:141] control plane version: v1.31.2
	I1028 13:22:30.556252  146109 api_server.go:131] duration metric: took 5.046545ms to wait for apiserver health ...
	I1028 13:22:30.556259  146109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 13:22:30.737626  146109 system_pods.go:59] 7 kube-system pods found
	I1028 13:22:30.737657  146109 system_pods.go:61] "coredns-7c65d6cfc9-sv82x" [22d9237d-92c8-4542-b976-af11fa5afab7] Running
	I1028 13:22:30.737662  146109 system_pods.go:61] "etcd-bridge-297280" [23d6720b-8493-48cd-a204-ed22e0c2b9ed] Running
	I1028 13:22:30.737666  146109 system_pods.go:61] "kube-apiserver-bridge-297280" [bb0c5106-3889-4672-aa65-7f2caea88565] Running
	I1028 13:22:30.737669  146109 system_pods.go:61] "kube-controller-manager-bridge-297280" [216e6bd4-d04a-4d9f-b00b-7fbad2734c5e] Running
	I1028 13:22:30.737672  146109 system_pods.go:61] "kube-proxy-b5p9h" [096a84b7-c39f-4fcd-8fc5-f5600efb7c46] Running
	I1028 13:22:30.737675  146109 system_pods.go:61] "kube-scheduler-bridge-297280" [c1ad345e-15ee-4f9e-9119-aef6c9571774] Running
	I1028 13:22:30.737678  146109 system_pods.go:61] "storage-provisioner" [9d198ae2-6ec7-4cd9-98f7-d70cdd12e133] Running
	I1028 13:22:30.737683  146109 system_pods.go:74] duration metric: took 181.419259ms to wait for pod list to return data ...
	I1028 13:22:30.737689  146109 default_sa.go:34] waiting for default service account to be created ...
	I1028 13:22:30.935442  146109 default_sa.go:45] found service account: "default"
	I1028 13:22:30.935468  146109 default_sa.go:55] duration metric: took 197.773341ms for default service account to be created ...
	I1028 13:22:30.935477  146109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 13:22:31.138478  146109 system_pods.go:86] 7 kube-system pods found
	I1028 13:22:31.138516  146109 system_pods.go:89] "coredns-7c65d6cfc9-sv82x" [22d9237d-92c8-4542-b976-af11fa5afab7] Running
	I1028 13:22:31.138526  146109 system_pods.go:89] "etcd-bridge-297280" [23d6720b-8493-48cd-a204-ed22e0c2b9ed] Running
	I1028 13:22:31.138537  146109 system_pods.go:89] "kube-apiserver-bridge-297280" [bb0c5106-3889-4672-aa65-7f2caea88565] Running
	I1028 13:22:31.138547  146109 system_pods.go:89] "kube-controller-manager-bridge-297280" [216e6bd4-d04a-4d9f-b00b-7fbad2734c5e] Running
	I1028 13:22:31.138555  146109 system_pods.go:89] "kube-proxy-b5p9h" [096a84b7-c39f-4fcd-8fc5-f5600efb7c46] Running
	I1028 13:22:31.138562  146109 system_pods.go:89] "kube-scheduler-bridge-297280" [c1ad345e-15ee-4f9e-9119-aef6c9571774] Running
	I1028 13:22:31.138573  146109 system_pods.go:89] "storage-provisioner" [9d198ae2-6ec7-4cd9-98f7-d70cdd12e133] Running
	I1028 13:22:31.138588  146109 system_pods.go:126] duration metric: took 203.103805ms to wait for k8s-apps to be running ...
	I1028 13:22:31.138601  146109 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 13:22:31.138667  146109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 13:22:31.152706  146109 system_svc.go:56] duration metric: took 14.09764ms WaitForService to wait for kubelet
	I1028 13:22:31.152735  146109 kubeadm.go:582] duration metric: took 43.96729077s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 13:22:31.152756  146109 node_conditions.go:102] verifying NodePressure condition ...
	I1028 13:22:31.336420  146109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 13:22:31.336451  146109 node_conditions.go:123] node cpu capacity is 2
	I1028 13:22:31.336466  146109 node_conditions.go:105] duration metric: took 183.704965ms to run NodePressure ...
	I1028 13:22:31.336478  146109 start.go:241] waiting for startup goroutines ...
	I1028 13:22:31.336484  146109 start.go:246] waiting for cluster config update ...
	I1028 13:22:31.336494  146109 start.go:255] writing updated cluster config ...
	I1028 13:22:31.336762  146109 ssh_runner.go:195] Run: rm -f paused
	I1028 13:22:31.384224  146109 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 13:22:31.386367  146109 out.go:177] * Done! kubectl is now configured to use "bridge-297280" cluster and "default" namespace by default
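The readiness and health checks recorded above can be reproduced by hand against the same cluster. A minimal sketch, assuming the API server address (192.168.39.112:8443) and kubeconfig context name (bridge-297280) reported in this run:

        # API server health endpoint polled by api_server.go (-k: the apiserver serves a cluster-internal CA)
        curl -k https://192.168.39.112:8443/healthz

        # System-critical pods that pod_ready.go waits on
        kubectl --context bridge-297280 -n kube-system get pods
        kubectl --context bridge-297280 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=5m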
	
	
	==> CRI-O <==
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.531761817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4296a888-d6c1-4c9a-b129-f47dc05d3ab6 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.532861146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fdb5398-d5b4-4d88-9e31-5a83423001f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.533254311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122487533232281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fdb5398-d5b4-4d88-9e31-5a83423001f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.533863092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=564f5a1a-9cc8-4318-854a-794bd4bbe881 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.533915362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=564f5a1a-9cc8-4318-854a-794bd4bbe881 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.534097876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4
dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730121216899852166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3
-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e4604
2d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef5
2fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb9
04,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=564f5a1a-9cc8-4318-854a-794bd4bbe881 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.547160080Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc832b88-6e9d-459f-936b-c63c71fbf3fe name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.547584538Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-x8gvd,Uid:4498824f-7ce1-4167-8701-74cadd3fa83c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121224355353120,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:13:36.492567745Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5f19d0ea-554f-4583-897a-132f6a43d88b,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1730121224351508442,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:13:36.492562945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f3d0a479730f9c4e335ab9f17c492cdaa4f4472e0fd099cc7503f0923b1f22f,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-rkx62,Uid:31c37fb4-0650-481d-b1e3-4956769450d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121222558112040,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-rkx62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31c37fb4-0650-481d-b1e3-4956769450d8,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28
T13:13:36.492561428Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&PodSandboxMetadata{Name:kube-proxy-ff797,Uid:ed2dce0b-4dc9-406e-a9c3-f91d75fa0876,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121216807089520,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4dc9-406e-a9c3-f91d75fa0876,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T13:13:36.492568841Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:21a53238-251d-4581-b4c3-3a788545ab0c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121216804647081,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-10-28T13:13:36.492566530Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-783661,Uid:929ab2ab8af58ab5ea6a58ca1ef52fdc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212229956178,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef52fdc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.58:8444,kubernetes.io/config.hash: 929ab2ab8af58ab5ea6a58ca1ef52fdc,kubernetes.io/config.seen: 2024-10-28T13:13:31.482643543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead983
1,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-783661,Uid:a73d04d1b11db2bf4d653e46042d6066,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212130301599,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e46042d6066,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.58:2379,kubernetes.io/config.hash: a73d04d1b11db2bf4d653e46042d6066,kubernetes.io/config.seen: 2024-10-28T13:13:31.500021374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-783661,Uid:20ee62c2966c39846bf64f2c0aebb904,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212119085403,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb904,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 20ee62c2966c39846bf64f2c0aebb904,kubernetes.io/config.seen: 2024-10-28T13:13:31.482649685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-783661,Uid:670be21a8d7463c6cb8c9defbce8fe7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730121212114641018,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670be21a8d7463c6cb8c9defbce8fe7a,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 670be21a8d7463c6cb8c9defbce8fe7a,kubernetes.io/config.seen: 2024-10-28T13:13:31.482648277Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bc832b88-6e9d-459f-936b-c63c71fbf3fe name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.548416099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae532c4e-cbd8-45f5-8ed5-d49e902c3fe0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.548468887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae532c4e-cbd8-45f5-8ed5-d49e902c3fe0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.548629426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4
dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db
2bf4d653e46042d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab
5ea6a58ca1ef52fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846
bf64f2c0aebb904,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae532c4e-cbd8-45f5-8ed5-d49e902c3fe0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.570907895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30edd8fa-12a8-4315-8ae1-98799daf1be8 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.570965751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30edd8fa-12a8-4315-8ae1-98799daf1be8 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.572461647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54fd8e78-9b2a-4eb2-90bc-96997a2587f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.572814111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122487572795741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54fd8e78-9b2a-4eb2-90bc-96997a2587f4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.573329240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b362eb9b-692d-4372-b100-d9adc5d9ce5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.573418073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b362eb9b-692d-4372-b100-d9adc5d9ce5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.573600417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730121216899852166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e46042d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef52fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb904,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b362eb9b-692d-4372-b100-d9adc5d9ce5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.602628386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be71c450-165e-478b-81bf-ea9a4d6e4501 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.602685185Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be71c450-165e-478b-81bf-ea9a4d6e4501 name=/runtime.v1.RuntimeService/Version
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.603847422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60ccc80a-5ae0-4216-8895-b79355826f53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.604495667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122487604475958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60ccc80a-5ae0-4216-8895-b79355826f53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.605066222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb990d14-e421-4ecc-9a3c-8143d103e8f8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.605112588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb990d14-e421-4ecc-9a3c-8143d103e8f8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 13:34:47 default-k8s-diff-port-783661 crio[703]: time="2024-10-28 13:34:47.605480062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730121247733644768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e09c1839e3c3965c878ca79fe0199b7648a7e2b226cb3d6882e8a7ff535868,PodSandboxId:e17779f35a09fd3742fbd224bad922f47bc32fb69ebfc07d022ad619c3448a4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730121226856940518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f19d0ea-554f-4583-897a-132f6a43d88b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8,PodSandboxId:f24eeae2d252ad970b59ff17f0d3bc2a89d7ba1cdec9e693a233bba288d0592b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730121224567001476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x8gvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4498824f-7ce1-4167-8701-74cadd3fa83c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604,PodSandboxId:773d59b76c20bda12414e36e8c45461385f478cca13cd68635d4092d5ea21f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730121216941912441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ff797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2dce0b-4
dc9-406e-a9c3-f91d75fa0876,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d,PodSandboxId:a2a1648969bb0b55909b0ff1369a61fc3bc3d5483e268578f562bd9eb6a87be9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730121216899852166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a53238-251d-4581-b4c3
-3a788545ab0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835,PodSandboxId:e9e8e12d510d98963429e6a0b9726b6d2e3d1c06a3f35d79c663720174f711b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730121212780549465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 670be21a8d7463c6cb8c9defbce8fe7a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a,PodSandboxId:6973b279778b0e9d763bfa5cb9c1669477c65c50e917d9724f771fe68ead9831,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730121212775989851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d04d1b11db2bf4d653e4604
2d6066,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc,PodSandboxId:91ea92ff3b0d2894ae7e222776c6371d01510779ff2476ca19b91e1c8d9ce9b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730121212767193271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929ab2ab8af58ab5ea6a58ca1ef5
2fdc,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f,PodSandboxId:524963d9655b6b34ad63f3b40f26ba4b110ca14d9836cc02f90346cb401d0ca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730121212771547312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-783661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ee62c2966c39846bf64f2c0aebb9
04,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb990d14-e421-4ecc-9a3c-8143d103e8f8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	390339ebf1058       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   a2a1648969bb0       storage-provisioner
	d3e09c1839e3c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   e17779f35a09f       busybox
	6c37109c5ef48       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   f24eeae2d252a       coredns-7c65d6cfc9-x8gvd
	b44db812a04c7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      21 minutes ago      Running             kube-proxy                1                   773d59b76c20b       kube-proxy-ff797
	dd70cdc4a6892       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   a2a1648969bb0       storage-provisioner
	018b66943fe6d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   1                   e9e8e12d510d9       kube-controller-manager-default-k8s-diff-port-783661
	7b0b68df1e367       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   6973b279778b0       etcd-default-k8s-diff-port-783661
	11560f139fa76       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      21 minutes ago      Running             kube-scheduler            1                   524963d9655b6       kube-scheduler-default-k8s-diff-port-783661
	c647572f5e66a       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            1                   91ea92ff3b0d2       kube-apiserver-default-k8s-diff-port-783661
	
	
	==> coredns [6c37109c5ef4843bbaafd3cc879a08f10151556e3cfae22c2cdb3065c5bd15f8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33796 - 46628 "HINFO IN 814899742147327372.6374675471951593904. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021446442s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-783661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-783661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=default-k8s-diff-port-783661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T13_05_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 13:05:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-783661
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 13:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 13:34:30 +0000   Mon, 28 Oct 2024 13:05:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 13:34:30 +0000   Mon, 28 Oct 2024 13:05:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 13:34:30 +0000   Mon, 28 Oct 2024 13:05:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 13:34:30 +0000   Mon, 28 Oct 2024 13:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.58
	  Hostname:    default-k8s-diff-port-783661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a3be741ed1c443d8f675efe86426771
	  System UUID:                3a3be741-ed1c-443d-8f67-5efe86426771
	  Boot ID:                    3e8c7c00-e5c0-4d8d-9c4e-6a33116d1720
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7c65d6cfc9-x8gvd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-783661                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-783661             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-783661    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-ff797                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-783661             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-rkx62                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-783661 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-783661 event: Registered Node default-k8s-diff-port-783661 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-783661 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-783661 event: Registered Node default-k8s-diff-port-783661 in Controller
	
	
	==> dmesg <==
	[Oct28 13:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051019] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037819] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.859497] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.512631] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.581939] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.060380] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053667] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.184694] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.109678] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.241396] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +3.836319] systemd-fstab-generator[783]: Ignoring "noauto" option for root device
	[  +2.365343] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.061629] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.500880] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.411592] systemd-fstab-generator[1545]: Ignoring "noauto" option for root device
	[  +3.314668] kauditd_printk_skb: 64 callbacks suppressed
	[Oct28 13:14] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [7b0b68df1e3670fd6234e11a1a697e3e683d53655d3a512c2a43dd51d28a225a] <==
	{"level":"info","ts":"2024-10-28T13:20:25.352793Z","caller":"traceutil/trace.go:171","msg":"trace[1749087699] transaction","detail":"{read_only:false; response_revision:904; number_of_response:1; }","duration":"217.228035ms","start":"2024-10-28T13:20:25.135552Z","end":"2024-10-28T13:20:25.352780Z","steps":["trace[1749087699] 'process raft request'  (duration: 216.884394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:20:25.352957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.170524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T13:20:25.353028Z","caller":"traceutil/trace.go:171","msg":"trace[1172242753] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:904; }","duration":"107.25385ms","start":"2024-10-28T13:20:25.245761Z","end":"2024-10-28T13:20:25.353015Z","steps":["trace[1172242753] 'agreement among raft nodes before linearized reading'  (duration: 107.136041ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:20:25.353185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.616206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:25.353224Z","caller":"traceutil/trace.go:171","msg":"trace[1896114426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:904; }","duration":"192.660684ms","start":"2024-10-28T13:20:25.160557Z","end":"2024-10-28T13:20:25.353218Z","steps":["trace[1896114426] 'agreement among raft nodes before linearized reading'  (duration: 192.602093ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:20:55.817983Z","caller":"traceutil/trace.go:171","msg":"trace[327840434] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"239.85397ms","start":"2024-10-28T13:20:55.578109Z","end":"2024-10-28T13:20:55.817963Z","steps":["trace[327840434] 'process raft request'  (duration: 239.61347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:20:56.272847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.265625ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324141525882451593 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" mod_revision:919 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T13:20:56.273035Z","caller":"traceutil/trace.go:171","msg":"trace[1696391543] linearizableReadLoop","detail":"{readStateIndex:1049; appliedIndex:1048; }","duration":"112.070509ms","start":"2024-10-28T13:20:56.160953Z","end":"2024-10-28T13:20:56.273023Z","steps":["trace[1696391543] 'read index received'  (duration: 25.482µs)","trace[1696391543] 'applied index is now lower than readState.Index'  (duration: 112.038146ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:20:56.273133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.175713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:56.273170Z","caller":"traceutil/trace.go:171","msg":"trace[626551964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:928; }","duration":"112.216951ms","start":"2024-10-28T13:20:56.160948Z","end":"2024-10-28T13:20:56.273165Z","steps":["trace[626551964] 'agreement among raft nodes before linearized reading'  (duration: 112.128712ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T13:20:56.273413Z","caller":"traceutil/trace.go:171","msg":"trace[1575647690] transaction","detail":"{read_only:false; response_revision:928; number_of_response:1; }","duration":"631.648438ms","start":"2024-10-28T13:20:55.641756Z","end":"2024-10-28T13:20:56.273404Z","steps":["trace[1575647690] 'process raft request'  (duration: 449.741748ms)","trace[1575647690] 'compare'  (duration: 181.070844ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T13:20:56.273504Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T13:20:55.641717Z","time spent":"631.746293ms","remote":"127.0.0.1:47792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" mod_revision:919 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sxvfojjvtbbdpvplupb2rudl6q\" > >"}
	{"level":"warn","ts":"2024-10-28T13:20:56.840713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.114275ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T13:20:56.840776Z","caller":"traceutil/trace.go:171","msg":"trace[2028860530] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:928; }","duration":"382.223155ms","start":"2024-10-28T13:20:56.458540Z","end":"2024-10-28T13:20:56.840763Z","steps":["trace[2028860530] 'range keys from in-memory index tree'  (duration: 382.101462ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T13:21:32.968775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.976907ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324141525882451810 > lease_revoke:<id:46c992d342a53704>","response":"size:28"}
	{"level":"warn","ts":"2024-10-28T13:21:42.977847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.435309ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324141525882451870 > lease_revoke:<id:46c992d342a53741>","response":"size:28"}
	{"level":"info","ts":"2024-10-28T13:23:34.958335Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":812}
	{"level":"info","ts":"2024-10-28T13:23:34.967136Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":812,"took":"8.191531ms","hash":2313614649,"current-db-size-bytes":2621440,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2621440,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-28T13:23:34.967222Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2313614649,"revision":812,"compact-revision":-1}
	{"level":"info","ts":"2024-10-28T13:28:34.969860Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1054}
	{"level":"info","ts":"2024-10-28T13:28:34.975592Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1054,"took":"5.402945ms","hash":3615048527,"current-db-size-bytes":2621440,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1605632,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-28T13:28:34.975674Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3615048527,"revision":1054,"compact-revision":812}
	{"level":"info","ts":"2024-10-28T13:33:34.977673Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1296}
	{"level":"info","ts":"2024-10-28T13:33:34.981044Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1296,"took":"3.059132ms","hash":3869304339,"current-db-size-bytes":2621440,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-28T13:33:34.981091Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3869304339,"revision":1296,"compact-revision":1054}
	
	
	==> kernel <==
	 13:34:47 up 21 min,  0 users,  load average: 0.21, 0.16, 0.10
	Linux default-k8s-diff-port-783661 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c647572f5e66ab5eb4e382368e848c9f7e4de5135d3afdcc232d84cc7f02f1bc] <==
	I1028 13:31:37.134535       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:31:37.134618       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:33:36.134593       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:33:36.134923       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:33:37.136270       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:33:37.136329       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 13:33:37.136448       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:33:37.136542       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 13:33:37.137523       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:33:37.137592       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 13:34:37.138536       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:34:37.138641       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 13:34:37.138744       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 13:34:37.138805       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 13:34:37.139771       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 13:34:37.140914       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [018b66943fe6da0f18517baddc0668fd0d05ce70cb342bf23b75419bf43c8835] <==
	E1028 13:29:39.731181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:29:40.283208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:29:50.563164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="217.644µs"
	I1028 13:30:02.561333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="74.16µs"
	E1028 13:30:09.738531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:30:10.291289       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:30:39.744738       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:30:40.298929       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:31:09.750283       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:31:10.306545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:31:39.757310       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:31:40.313584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:32:09.764202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:32:10.321224       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:32:39.770498       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:32:40.328787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:33:09.776825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:33:10.335994       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:33:39.782606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:33:40.343147       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 13:34:09.789786       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:34:10.350285       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 13:34:30.782781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-783661"
	E1028 13:34:39.794946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 13:34:40.357172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b44db812a04c728ad8cb1fb53b129eabfac45a027159f59c12587a843428d604] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 13:13:37.234447       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 13:13:37.245613       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.58"]
	E1028 13:13:37.245689       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 13:13:37.298486       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 13:13:37.298539       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 13:13:37.298570       1 server_linux.go:169] "Using iptables Proxier"
	I1028 13:13:37.300568       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 13:13:37.300783       1 server.go:483] "Version info" version="v1.31.2"
	I1028 13:13:37.300826       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 13:13:37.302437       1 config.go:199] "Starting service config controller"
	I1028 13:13:37.302472       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 13:13:37.302506       1 config.go:105] "Starting endpoint slice config controller"
	I1028 13:13:37.302510       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 13:13:37.303140       1 config.go:328] "Starting node config controller"
	I1028 13:13:37.303168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 13:13:37.402592       1 shared_informer.go:320] Caches are synced for service config
	I1028 13:13:37.402613       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 13:13:37.403224       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [11560f139fa76c0c0f0957bc933d348160c8580ba271c2dd4898062bbee4d31f] <==
	I1028 13:13:33.805249       1 serving.go:386] Generated self-signed cert in-memory
	W1028 13:13:36.091805       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 13:13:36.093815       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 13:13:36.094299       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 13:13:36.094393       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 13:13:36.138238       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 13:13:36.138275       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 13:13:36.140348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 13:13:36.140521       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 13:13:36.140589       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 13:13:36.140666       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 13:13:36.241480       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 13:33:51 default-k8s-diff-port-783661 kubelet[911]: E1028 13:33:51.810053     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122431809778984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:33:51 default-k8s-diff-port-783661 kubelet[911]: E1028 13:33:51.810399     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122431809778984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:33:55 default-k8s-diff-port-783661 kubelet[911]: E1028 13:33:55.549629     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:34:01 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:01.812412     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122441812122699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:01 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:01.813240     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122441812122699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:08 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:08.548100     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:34:11 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:11.814207     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122451813998226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:11 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:11.814236     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122451813998226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:21 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:21.548812     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:34:21 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:21.815552     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122461815293410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:21 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:21.815595     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122461815293410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:31 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:31.561589     911 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 13:34:31 default-k8s-diff-port-783661 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 13:34:31 default-k8s-diff-port-783661 kubelet[911]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 13:34:31 default-k8s-diff-port-783661 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 13:34:31 default-k8s-diff-port-783661 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 13:34:31 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:31.818657     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122471817896365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:31 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:31.818718     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122471817896365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:35 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:35.548932     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	Oct 28 13:34:41 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:41.819783     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122481819567007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:41 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:41.819820     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730122481819567007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 13:34:46 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:46.562918     911 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 28 13:34:46 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:46.562994     911 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 28 13:34:46 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:46.563270     911 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk96q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-rkx62_kube-system(31c37fb4-0650-481d-b1e3-4956769450d8): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 28 13:34:46 default-k8s-diff-port-783661 kubelet[911]: E1028 13:34:46.564770     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-rkx62" podUID="31c37fb4-0650-481d-b1e3-4956769450d8"
	
	
	==> storage-provisioner [390339ebf1058437dbdeba2fa840f98463d0cc954c0c29d054cae92218e8e053] <==
	I1028 13:14:07.809565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 13:14:07.821054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 13:14:07.821126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 13:14:07.831346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 13:14:07.831570       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-783661_bccd7f5b-ea4f-4651-ae50-e0f4e0470927!
	I1028 13:14:07.838625       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"861f9f50-5b3b-41e4-b1fc-a29ba85cf992", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-783661_bccd7f5b-ea4f-4651-ae50-e0f4e0470927 became leader
	I1028 13:14:07.932748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-783661_bccd7f5b-ea4f-4651-ae50-e0f4e0470927!
	
	
	==> storage-provisioner [dd70cdc4a68922f9a2ac6be58cfa3d6ed55ff71603131afdd8cc2a07781d775d] <==
	I1028 13:13:36.994915       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 13:14:06.997578       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rkx62
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 describe pod metrics-server-6867b74b74-rkx62
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-783661 describe pod metrics-server-6867b74b74-rkx62: exit status 1 (56.993066ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rkx62" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-783661 describe pod metrics-server-6867b74b74-rkx62: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.26s)
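Note on the failure above: the repeated kubelet ErrImagePull/ImagePullBackOff entries for fake.domain/registry.k8s.io/echoserver:1.4 show why metrics-server-6867b74b74-rkx62 never became ready, and the post-mortem describe returned NotFound most likely because the pod was removed between the non-running-pods listing (helpers_test.go:272) and the describe (helpers_test.go:277). A minimal sketch, not part of the test suite, for redoing that post-mortem by hand without the race — non-running pods are listed and described in the same pass; the context name is taken from the log above:

	CTX=default-k8s-diff-port-783661
	kubectl --context "$CTX" get pods -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  # Describe immediately so the pod is less likely to disappear in between.
	  kubectl --context "$CTX" describe pod "$name" -n "$ns" || true
	done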

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 5.79
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 75.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 126.69
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.48
35 TestAddons/parallel/Registry 16.92
37 TestAddons/parallel/InspektorGadget 10.69
40 TestAddons/parallel/CSI 52.96
41 TestAddons/parallel/Headlamp 19.57
42 TestAddons/parallel/CloudSpanner 5.78
43 TestAddons/parallel/LocalPath 55.08
44 TestAddons/parallel/NvidiaDevicePlugin 6.62
45 TestAddons/parallel/Yakd 11.92
48 TestCertOptions 85.36
49 TestCertExpiration 255.92
51 TestForceSystemdFlag 45.64
52 TestForceSystemdEnv 62.49
54 TestKVMDriverInstallOrUpdate 4.1
58 TestErrorSpam/setup 37.5
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.49
62 TestErrorSpam/unpause 1.65
63 TestErrorSpam/stop 4.45
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 84.18
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 32.14
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.39
75 TestFunctional/serial/CacheCmd/cache/add_local 1.88
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 34.15
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.35
86 TestFunctional/serial/LogsFileCmd 1.27
87 TestFunctional/serial/InvalidService 4.51
89 TestFunctional/parallel/ConfigCmd 0.33
90 TestFunctional/parallel/DashboardCmd 11.45
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 7.47
98 TestFunctional/parallel/AddonsCmd 0.12
99 TestFunctional/parallel/PersistentVolumeClaim 40.96
101 TestFunctional/parallel/SSHCmd 0.4
102 TestFunctional/parallel/CpCmd 1.43
103 TestFunctional/parallel/MySQL 26.97
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.51
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
113 TestFunctional/parallel/License 0.17
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.18
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
116 TestFunctional/parallel/ProfileCmd/profile_list 0.39
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
118 TestFunctional/parallel/MountCmd/any-port 7.84
119 TestFunctional/parallel/MountCmd/specific-port 1.68
120 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
121 TestFunctional/parallel/ServiceCmd/List 0.99
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
124 TestFunctional/parallel/ServiceCmd/Format 0.29
125 TestFunctional/parallel/Version/short 0.05
126 TestFunctional/parallel/Version/components 0.46
127 TestFunctional/parallel/ServiceCmd/URL 0.29
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.46
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.48
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.46
132 TestFunctional/parallel/ImageCommands/ImageBuild 9.27
133 TestFunctional/parallel/ImageCommands/Setup 1.51
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.42
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.01
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.33
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.81
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 190.15
160 TestMultiControlPlane/serial/DeployApp 6.15
161 TestMultiControlPlane/serial/PingHostFromPods 1.13
162 TestMultiControlPlane/serial/AddWorkerNode 53.38
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
165 TestMultiControlPlane/serial/CopyFile 12.76
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.46
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.59
174 TestMultiControlPlane/serial/RestartCluster 330.96
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
176 TestMultiControlPlane/serial/AddSecondaryNode 74.04
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
181 TestJSONOutput/start/Command 81.01
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.67
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.59
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.62
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 82.53
213 TestMountStart/serial/StartWithMountFirst 24.36
214 TestMountStart/serial/VerifyMountFirst 0.37
215 TestMountStart/serial/StartWithMountSecond 25.81
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.67
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.27
220 TestMountStart/serial/RestartStopped 21.81
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 105.76
225 TestMultiNode/serial/DeployApp2Nodes 5.02
226 TestMultiNode/serial/PingHostFrom2Pods 0.75
227 TestMultiNode/serial/AddNode 46.35
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.54
230 TestMultiNode/serial/CopyFile 6.89
231 TestMultiNode/serial/StopNode 2.13
232 TestMultiNode/serial/StartAfterStop 38.36
234 TestMultiNode/serial/DeleteNode 2.14
236 TestMultiNode/serial/RestartMultiNode 175.84
237 TestMultiNode/serial/ValidateNameConflict 42.6
244 TestScheduledStopUnix 114.36
248 TestRunningBinaryUpgrade 191.85
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
262 TestPause/serial/Start 90.55
263 TestNoKubernetes/serial/StartWithK8s 88.28
264 TestNoKubernetes/serial/StartWithStopK8s 41.39
265 TestPause/serial/SecondStartNoReconfiguration 99.79
266 TestNoKubernetes/serial/Start 29.17
274 TestNetworkPlugins/group/false 5.85
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
279 TestNoKubernetes/serial/ProfileList 31.21
280 TestNoKubernetes/serial/Stop 1.7
281 TestPause/serial/Pause 1.03
282 TestPause/serial/VerifyStatus 0.3
283 TestPause/serial/Unpause 0.75
284 TestNoKubernetes/serial/StartNoArgs 48.93
285 TestPause/serial/PauseAgain 0.81
286 TestPause/serial/DeletePaused 1.57
287 TestPause/serial/VerifyDeletedResources 1.37
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
289 TestStoppedBinaryUpgrade/Setup 0.4
290 TestStoppedBinaryUpgrade/Upgrade 143.34
293 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
295 TestStartStop/group/no-preload/serial/FirstStart 96.3
297 TestStartStop/group/embed-certs/serial/FirstStart 56.59
298 TestStartStop/group/embed-certs/serial/DeployApp 9.26
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
301 TestStartStop/group/no-preload/serial/DeployApp 10.26
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
307 TestStartStop/group/embed-certs/serial/SecondStart 655.72
309 TestStartStop/group/no-preload/serial/SecondStart 570.62
310 TestStartStop/group/old-k8s-version/serial/Stop 4.28
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.51
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 577.68
327 TestStartStop/group/newest-cni/serial/FirstStart 45.51
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.23
330 TestStartStop/group/newest-cni/serial/Stop 11.32
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/newest-cni/serial/SecondStart 36.25
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
336 TestStartStop/group/newest-cni/serial/Pause 2.31
337 TestNetworkPlugins/group/auto/Start 79.45
338 TestNetworkPlugins/group/kindnet/Start 74.53
340 TestNetworkPlugins/group/calico/Start 94.66
341 TestNetworkPlugins/group/auto/KubeletFlags 0.24
342 TestNetworkPlugins/group/auto/NetCatPod 11.25
343 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
344 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
345 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
346 TestNetworkPlugins/group/auto/DNS 0.17
347 TestNetworkPlugins/group/auto/Localhost 0.14
348 TestNetworkPlugins/group/auto/HairPin 0.15
349 TestNetworkPlugins/group/kindnet/DNS 0.19
350 TestNetworkPlugins/group/kindnet/Localhost 0.15
351 TestNetworkPlugins/group/kindnet/HairPin 0.14
352 TestNetworkPlugins/group/custom-flannel/Start 67.25
353 TestNetworkPlugins/group/enable-default-cni/Start 100.93
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.24
356 TestNetworkPlugins/group/calico/NetCatPod 14.28
357 TestNetworkPlugins/group/calico/DNS 0.15
358 TestNetworkPlugins/group/calico/Localhost 0.14
359 TestNetworkPlugins/group/calico/HairPin 0.15
360 TestNetworkPlugins/group/flannel/Start 69.71
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
363 TestNetworkPlugins/group/custom-flannel/DNS 0.16
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
366 TestNetworkPlugins/group/bridge/Start 91.72
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
374 TestNetworkPlugins/group/flannel/NetCatPod 10.27
375 TestNetworkPlugins/group/flannel/DNS 0.15
376 TestNetworkPlugins/group/flannel/Localhost 0.12
377 TestNetworkPlugins/group/flannel/HairPin 0.11
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
379 TestNetworkPlugins/group/bridge/NetCatPod 11.21
380 TestNetworkPlugins/group/bridge/DNS 0.14
381 TestNetworkPlugins/group/bridge/Localhost 0.15
382 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (7.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-618409 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-618409 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.3178299s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 11:37:05.793990   84965 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1028 11:37:05.794118   84965 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-618409
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-618409: exit status 85 (59.672697ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-618409 | jenkins | v1.34.0 | 28 Oct 24 11:36 UTC |          |
	|         | -p download-only-618409        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:36:58
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:36:58.516810   84977 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:36:58.516957   84977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:36:58.516968   84977 out.go:358] Setting ErrFile to fd 2...
	I1028 11:36:58.516975   84977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:36:58.517157   84977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	W1028 11:36:58.517305   84977 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19875-77800/.minikube/config/config.json: open /home/jenkins/minikube-integration/19875-77800/.minikube/config/config.json: no such file or directory
	I1028 11:36:58.517917   84977 out.go:352] Setting JSON to true
	I1028 11:36:58.518805   84977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4768,"bootTime":1730110650,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:36:58.518863   84977 start.go:139] virtualization: kvm guest
	I1028 11:36:58.521255   84977 out.go:97] [download-only-618409] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:36:58.521412   84977 notify.go:220] Checking for updates...
	W1028 11:36:58.521435   84977 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 11:36:58.522771   84977 out.go:169] MINIKUBE_LOCATION=19875
	I1028 11:36:58.524096   84977 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:36:58.525374   84977 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:36:58.526645   84977 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:36:58.527953   84977 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 11:36:58.531100   84977 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 11:36:58.531302   84977 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:36:58.566817   84977 out.go:97] Using the kvm2 driver based on user configuration
	I1028 11:36:58.566858   84977 start.go:297] selected driver: kvm2
	I1028 11:36:58.566869   84977 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:36:58.567196   84977 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:36:58.567279   84977 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:36:58.582179   84977 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:36:58.582226   84977 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:36:58.582750   84977 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1028 11:36:58.582907   84977 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 11:36:58.582939   84977 cni.go:84] Creating CNI manager for ""
	I1028 11:36:58.582998   84977 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:36:58.583010   84977 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 11:36:58.583067   84977 start.go:340] cluster config:
	{Name:download-only-618409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-618409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:36:58.583250   84977 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:36:58.584870   84977 out.go:97] Downloading VM boot image ...
	I1028 11:36:58.584913   84977 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:37:01.216080   84977 out.go:97] Starting "download-only-618409" primary control-plane node in "download-only-618409" cluster
	I1028 11:37:01.216109   84977 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 11:37:01.242414   84977 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 11:37:01.242443   84977 cache.go:56] Caching tarball of preloaded images
	I1028 11:37:01.242632   84977 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 11:37:01.244195   84977 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 11:37:01.244213   84977 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1028 11:37:01.270300   84977 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-618409 host does not exist
	  To start a cluster, run: "minikube start -p download-only-618409"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
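The log above records both the cached preload tarball path and the md5 it was fetched against (checksum=md5:f93b07cde9c3289306cbaeb7a1803c19). A minimal sketch, not part of the test run, for re-checking the cached tarball on the Jenkins host against that value:

	expected=f93b07cde9c3289306cbaeb7a1803c19
	tarball=/home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	actual=$(md5sum "$tarball" | awk '{print $1}')
	# Compare the computed digest with the one advertised in the download URL above.
	[ "$actual" = "$expected" ] && echo "preload checksum OK" || echo "checksum mismatch: $actual"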

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-618409
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (5.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-165595 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-165595 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.787090839s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.79s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 11:37:11.895664   84965 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1028 11:37:11.895721   84965 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-165595
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-165595: exit status 85 (61.542571ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-618409 | jenkins | v1.34.0 | 28 Oct 24 11:36 UTC |                     |
	|         | -p download-only-618409        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| delete  | -p download-only-618409        | download-only-618409 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC | 28 Oct 24 11:37 UTC |
	| start   | -o=json --download-only        | download-only-165595 | jenkins | v1.34.0 | 28 Oct 24 11:37 UTC |                     |
	|         | -p download-only-165595        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:37:06
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:37:06.149131   85167 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:37:06.149221   85167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:37:06.149229   85167 out.go:358] Setting ErrFile to fd 2...
	I1028 11:37:06.149233   85167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:37:06.149387   85167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:37:06.149914   85167 out.go:352] Setting JSON to true
	I1028 11:37:06.150726   85167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4776,"bootTime":1730110650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:37:06.150821   85167 start.go:139] virtualization: kvm guest
	I1028 11:37:06.152750   85167 out.go:97] [download-only-165595] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:37:06.152877   85167 notify.go:220] Checking for updates...
	I1028 11:37:06.154198   85167 out.go:169] MINIKUBE_LOCATION=19875
	I1028 11:37:06.155597   85167 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:37:06.156855   85167 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:37:06.158150   85167 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:37:06.159264   85167 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 11:37:06.161450   85167 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 11:37:06.161719   85167 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:37:06.192769   85167 out.go:97] Using the kvm2 driver based on user configuration
	I1028 11:37:06.192790   85167 start.go:297] selected driver: kvm2
	I1028 11:37:06.192795   85167 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:37:06.193095   85167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:37:06.193171   85167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19875-77800/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:37:06.207617   85167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:37:06.207675   85167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:37:06.208380   85167 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1028 11:37:06.208591   85167 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 11:37:06.208626   85167 cni.go:84] Creating CNI manager for ""
	I1028 11:37:06.208712   85167 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:37:06.208736   85167 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 11:37:06.208794   85167 start.go:340] cluster config:
	{Name:download-only-165595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-165595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:37:06.208886   85167 iso.go:125] acquiring lock: {Name:mk63b0b51f7068da8478ead59802e85163a9315f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:37:06.210528   85167 out.go:97] Starting "download-only-165595" primary control-plane node in "download-only-165595" cluster
	I1028 11:37:06.210546   85167 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:37:06.237582   85167 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:37:06.237616   85167 cache.go:56] Caching tarball of preloaded images
	I1028 11:37:06.237766   85167 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:37:06.239535   85167 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 11:37:06.239550   85167 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 11:37:06.264536   85167 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:37:10.472109   85167 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 11:37:10.472201   85167 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19875-77800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-165595 host does not exist
	  To start a cluster, run: "minikube start -p download-only-165595"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-165595
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 11:37:12.453600   84965 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-029933 --alsologtostderr --binary-mirror http://127.0.0.1:41615 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-029933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-029933
--- PASS: TestBinaryMirror (0.60s)
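TestBinaryMirror points minikube at a local mirror (--binary-mirror http://127.0.0.1:41615) instead of dl.k8s.io. The report does not show which paths minikube requests from the mirror, so the sketch below simply reproduces the release/v1.31.2/bin/linux/amd64/ layout visible in the dl.k8s.io URL above as an assumption; the port and directory name are illustrative, not part of the test:

	mkdir -p mirror/release/v1.31.2/bin/linux/amd64
	curl -fsSL -o mirror/release/v1.31.2/bin/linux/amd64/kubectl \
	  https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl
	# Serve the mirror on the port the test used, then pass it via --binary-mirror.
	python3 -m http.server 41615 --directory mirror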

                                                
                                    
x
+
TestOffline (75.68s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-255968 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-255968 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.608877869s)
helpers_test.go:175: Cleaning up "offline-crio-255968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-255968
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-255968: (1.074769593s)
--- PASS: TestOffline (75.68s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-558164
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-558164: exit status 85 (51.180537ms)

                                                
                                                
-- stdout --
	* Profile "addons-558164" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-558164"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-558164
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-558164: exit status 85 (50.202925ms)

                                                
                                                
-- stdout --
	* Profile "addons-558164" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-558164"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (126.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-558164 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-558164 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.694540418s)
--- PASS: TestAddons/Setup (126.69s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-558164 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-558164 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-558164 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-558164 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0a0e12d7-e422-4b10-99ec-bb257d1f85e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0a0e12d7-e422-4b10-99ec-bb257d1f85e6] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004126476s
addons_test.go:633: (dbg) Run:  kubectl --context addons-558164 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-558164 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-558164 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.48s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.111606ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-knm9h" [ef5d7a78-4f98-44f2-8f1f-121ec2384ac3] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003035849s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6mfkq" [4c6c611d-0f32-46ff-b60d-db1ab8734769] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005020898s
addons_test.go:331: (dbg) Run:  kubectl --context addons-558164 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-558164 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-558164 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.193762881s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 ip
2024/10/28 11:39:59 [DEBUG] GET http://192.168.39.31:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.92s)
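The registry check above probes http://registry.kube-system.svc.cluster.local from inside the cluster and then fetches http://192.168.39.31:5000 from the host (the minikube IP reported for this run). Assuming the addon runs a standard Distribution registry, its HTTP API can also be probed directly from the host — a sketch, not part of the test:

	IP=192.168.39.31                        # value reported by "minikube -p addons-558164 ip" in this run
	curl -sSI "http://$IP:5000/v2/"         # registry API root; expect a 200 response
	curl -s   "http://$IP:5000/v2/_catalog" # list repositories pushed to the addon registry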

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.69s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nrlh8" [762f514e-532b-40cc-9bac-6c7db049b54b] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004489797s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 addons disable inspektor-gadget --alsologtostderr -v=1: (5.684081009s)
--- PASS: TestAddons/parallel/InspektorGadget (10.69s)

                                                
                                    
TestAddons/parallel/CSI (52.96s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1028 11:40:00.521717   84965 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1028 11:40:00.525822   84965 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 11:40:00.525846   84965 kapi.go:107] duration metric: took 4.148406ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.157448ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-558164 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-558164 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d2232c1-1484-4c07-bf53-0ecf02c30460] Pending
helpers_test.go:344: "task-pv-pod" [3d2232c1-1484-4c07-bf53-0ecf02c30460] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d2232c1-1484-4c07-bf53-0ecf02c30460] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003872704s
addons_test.go:511: (dbg) Run:  kubectl --context addons-558164 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-558164 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-558164 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-558164 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-558164 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-558164 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
I1028 11:40:26.613752   84965 kapi.go:150] Service nginx in namespace default found.
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-558164 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f03a2b99-c5be-4b6a-b3a5-d89be3baae6d] Pending
helpers_test.go:344: "task-pv-pod-restore" [f03a2b99-c5be-4b6a-b3a5-d89be3baae6d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f03a2b99-c5be-4b6a-b3a5-d89be3baae6d] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005331765s
addons_test.go:553: (dbg) Run:  kubectl --context addons-558164 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-558164 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-558164 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.648129434s)
--- PASS: TestAddons/parallel/CSI (52.96s)
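Note on the restore step above: restoring from the snapshot is done by creating a second PVC whose dataSource points at the VolumeSnapshot. The actual contents of testdata/csi-hostpath-driver/pvc-restore.yaml are not reproduced in this log, so the manifest below is only a sketch of the standard pattern; the size and the omitted storageClassName are assumptions, while the hpvc-restore and new-snapshot-demo names come from the log:

cat <<'EOF' | kubectl --context addons-558164 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF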

                                                
                                    
TestAddons/parallel/Headlamp (19.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-558164 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-6gvq4" [5c575350-dba0-45b7-8a44-51e78cf330f2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-6gvq4" [5c575350-dba0-45b7-8a44-51e78cf330f2] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.006970127s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 addons disable headlamp --alsologtostderr -v=1: (5.713435485s)
--- PASS: TestAddons/parallel/Headlamp (19.57s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-z8lgs" [f9e34de0-37a2-4416-a918-ae23528bea95] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003267934s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                    
TestAddons/parallel/LocalPath (55.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-558164 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-558164 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [edf878b6-53e4-4160-87f3-ec2951b6438e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [edf878b6-53e4-4160-87f3-ec2951b6438e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [edf878b6-53e4-4160-87f3-ec2951b6438e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002774392s
addons_test.go:906: (dbg) Run:  kubectl --context addons-558164 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 ssh "cat /opt/local-path-provisioner/pvc-ebacc6ce-c961-47ab-93f4-2185834202e1_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-558164 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-558164 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.267835698s)
--- PASS: TestAddons/parallel/LocalPath (55.08s)
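Note: the storage-provisioner-rancher addon deploys Rancher's local-path-provisioner, so the PVC the test applies only needs to name that StorageClass. A minimal sketch, assuming the provisioner's usual default class name local-path and an arbitrary size, since testdata/storage-provisioner-rancher/pvc.yaml itself is not shown in this log (only the test-pvc name is taken from it):

cat <<'EOF' | kubectl --context addons-558164 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
EOF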

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tmgxz" [2222e84c-777d-4de9-a7d0-c0f8307c6df7] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006137654s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
TestAddons/parallel/Yakd (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-gtnln" [e99a132a-a0f5-4a8a-8094-bdb808c4ccfe] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003915207s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-558164 addons disable yakd --alsologtostderr -v=1: (5.917360676s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

                                                
                                    
TestCertOptions (85.36s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-764199 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-764199 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m23.882596582s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-764199 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-764199 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-764199 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-764199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-764199
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-764199: (1.032947392s)
--- PASS: TestCertOptions (85.36s)
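Note: the openssl call above dumps the whole apiserver certificate; the parts the test actually cares about can be pulled out directly. A short sketch using the same files, context, and flags as the test:

# Only the Subject Alternative Name block, which should list the extra
# --apiserver-ips / --apiserver-names values passed to start.
out/minikube-linux-amd64 -p cert-options-764199 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# The non-default --apiserver-port=8555 shows up in the kubeconfig server URL.
kubectl --context cert-options-764199 config view \
  -o jsonpath='{.clusters[0].cluster.server}'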

                                                
                                    
TestCertExpiration (255.92s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-717454 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-717454 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.494072177s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-717454 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-717454 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.369933287s)
helpers_test.go:175: Cleaning up "cert-expiration-717454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-717454
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-717454: (1.054513526s)
--- PASS: TestCertExpiration (255.92s)
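Note: the expiry the test manipulates can be read straight off the certificate. A one-line sketch with the same profile and the cert path used elsewhere in this report:

# notAfter is only minutes away after the --cert-expiration=3m start, and roughly
# a year out after the second start with --cert-expiration=8760h.
out/minikube-linux-amd64 -p cert-expiration-717454 ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"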

                                                
                                    
TestForceSystemdFlag (45.64s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-003088 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-003088 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.416217257s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-003088 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-003088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-003088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-003088: (1.027836167s)
--- PASS: TestForceSystemdFlag (45.64s)
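Note: the cat of /etc/crio/crio.conf.d/02-crio.conf above is there to confirm that --force-systemd switched CRI-O to the systemd cgroup manager. A narrower sketch of the same check; the expected value is an assumption based on standard CRI-O configuration, not a verbatim copy of the file on this VM:

out/minikube-linux-amd64 -p force-systemd-flag-003088 ssh \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# Expected output (assumed):
#   cgroup_manager = "systemd"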

                                                
                                    
TestForceSystemdEnv (62.49s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-261374 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-261374 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.420529425s)
helpers_test.go:175: Cleaning up "force-systemd-env-261374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-261374
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-261374: (1.064708267s)
--- PASS: TestForceSystemdEnv (62.49s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.1s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1028 12:43:46.549611   84965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 12:43:46.549770   84965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1028 12:43:46.580942   84965 install.go:62] docker-machine-driver-kvm2: exit status 1
W1028 12:43:46.581360   84965 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 12:43:46.581441   84965 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3856143187/001/docker-machine-driver-kvm2
I1028 12:43:46.822063   84965 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3856143187/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000976e20 gz:0xc000976e28 tar:0xc000976dd0 tar.bz2:0xc000976de0 tar.gz:0xc000976df0 tar.xz:0xc000976e00 tar.zst:0xc000976e10 tbz2:0xc000976de0 tgz:0xc000976df0 txz:0xc000976e00 tzst:0xc000976e10 xz:0xc000976e30 zip:0xc000976e40 zst:0xc000976e38] Getters:map[file:0xc00189d890 http:0xc00002d860 https:0xc00002d8b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 12:43:46.822133   84965 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3856143187/001/docker-machine-driver-kvm2
I1028 12:43:48.981974   84965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 12:43:48.982070   84965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1028 12:43:49.009432   84965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1028 12:43:49.009495   84965 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1028 12:43:49.009575   84965 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 12:43:49.009612   84965 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3856143187/002/docker-machine-driver-kvm2
I1028 12:43:49.059521   84965 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3856143187/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000976e20 gz:0xc000976e28 tar:0xc000976dd0 tar.bz2:0xc000976de0 tar.gz:0xc000976df0 tar.xz:0xc000976e00 tar.zst:0xc000976e10 tbz2:0xc000976de0 tgz:0xc000976df0 txz:0xc000976e00 tzst:0xc000976e10 xz:0xc000976e30 zip:0xc000976e40 zst:0xc000976e38] Getters:map[file:0xc000864d10 http:0xc000732f00 https:0xc000732f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 12:43:49.059560   84965 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3856143187/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.10s)
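Note: the two "bad response code: 404" retries above are the expected fallback path: the arch-suffixed asset does not exist for the old v1.3.0 test release, so the downloader falls back to the unsuffixed name. A sketch to reproduce the distinction by hand against the same URLs:

# The arch-specific checksum is missing for v1.3.0 ...
curl -sI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 | head -n1
# ... while the common-named asset is what actually gets downloaded.
curl -sI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 | head -n1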

                                                
                                    
TestErrorSpam/setup (37.5s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-088372 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-088372 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-088372 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-088372 --driver=kvm2  --container-runtime=crio: (37.497340283s)
--- PASS: TestErrorSpam/setup (37.50s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 pause
E1028 11:49:20.376834   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:49:20.383245   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:49:20.394658   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:49:20.416054   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:49:20.457534   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:49:20.539098   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:49:20.700503   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 pause
E1028 11:49:21.021829   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 unpause
E1028 11:49:21.663938   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 unpause
E1028 11:49:22.945571   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestErrorSpam/unpause (1.65s)

                                                
                                    
TestErrorSpam/stop (4.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 stop: (1.545196663s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 stop
E1028 11:49:25.507788   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 stop: (1.410597567s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-088372 --log_dir /tmp/nospam-088372 stop: (1.491787946s)
--- PASS: TestErrorSpam/stop (4.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19875-77800/.minikube/files/etc/test/nested/copy/84965/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (84.18s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-665758 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1028 11:49:30.629355   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:49:40.871571   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:50:01.353077   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:50:42.315837   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-665758 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.180825257s)
--- PASS: TestFunctional/serial/StartWithProxy (84.18s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1028 11:50:52.326214   84965 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-665758 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-665758 --alsologtostderr -v=8: (32.13583462s)
functional_test.go:663: soft start took 32.136550146s for "functional-665758" cluster.
I1028 11:51:24.462406   84965 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (32.14s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-665758 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 cache add registry.k8s.io/pause:3.1: (1.029382955s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 cache add registry.k8s.io/pause:3.3: (1.264335045s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 cache add registry.k8s.io/pause:latest: (1.094722398s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-665758 /tmp/TestFunctionalserialCacheCmdcacheadd_local3314548266/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cache add minikube-local-cache-test:functional-665758
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 cache add minikube-local-cache-test:functional-665758: (1.576549019s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cache delete minikube-local-cache-test:functional-665758
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-665758
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.568314ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 kubectl -- --context functional-665758 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-665758 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.15s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-665758 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1028 11:52:04.237231   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-665758 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.147504776s)
functional_test.go:761: restart took 34.147639381s for "functional-665758" cluster.
I1028 11:52:06.250224   84965 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (34.15s)
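Note: one way to confirm that the --extra-config value above actually reached the control plane is to look at the kube-apiserver pod's command line. This is only a sketch, not something the test runs, and it assumes the kubeadm-standard component=kube-apiserver label on the static pod:

kubectl --context functional-665758 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' \
  | tr ',' '\n' | grep enable-admission-plugins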

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-665758 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
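Note: the phase/status pairs printed above are extracted from the -o=json output; the same view can be had with a jsonpath one-liner, sketched here with the label selector the test uses:

kubectl --context functional-665758 get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'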

                                                
                                    
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 logs: (1.351395075s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 logs --file /tmp/TestFunctionalserialLogsFileCmd685100921/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 logs --file /tmp/TestFunctionalserialLogsFileCmd685100921/001/logs.txt: (1.268812253s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
TestFunctional/serial/InvalidService (4.51s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-665758 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-665758
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-665758: exit status 115 (270.350001ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.154:31547 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-665758 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-665758 delete -f testdata/invalidsvc.yaml: (1.045052596s)
--- PASS: TestFunctional/serial/InvalidService (4.51s)
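Note: testdata/invalidsvc.yaml is not reproduced in this log; the SVC_UNREACHABLE exit above simply means the service selects no running pod. A sketch of a service in that spirit, where the selector value is an assumption and only the name, port, and NodePort type come from the output table:

cat <<'EOF' | kubectl --context functional-665758 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist
  ports:
  - port: 80
EOF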

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 config get cpus: exit status 14 (64.293557ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 config get cpus: exit status 14 (49.197165ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-665758 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-665758 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 93060: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.45s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-665758 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-665758 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.806134ms)

                                                
                                                
-- stdout --
	* [functional-665758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:52:16.228184   92926 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:52:16.228302   92926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:16.228312   92926 out.go:358] Setting ErrFile to fd 2...
	I1028 11:52:16.228319   92926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:16.228532   92926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:52:16.229065   92926 out.go:352] Setting JSON to false
	I1028 11:52:16.230031   92926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5686,"bootTime":1730110650,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:52:16.230139   92926 start.go:139] virtualization: kvm guest
	I1028 11:52:16.231952   92926 out.go:177] * [functional-665758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:52:16.233663   92926 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:52:16.233676   92926 notify.go:220] Checking for updates...
	I1028 11:52:16.236059   92926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:52:16.237242   92926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:52:16.238359   92926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:16.239656   92926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:52:16.240772   92926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:52:16.242114   92926 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:52:16.242513   92926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:16.242562   92926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:16.257521   92926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35693
	I1028 11:52:16.257962   92926 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:16.258581   92926 main.go:141] libmachine: Using API Version  1
	I1028 11:52:16.258604   92926 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:16.259024   92926 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:16.259243   92926 main.go:141] libmachine: (functional-665758) Calling .DriverName
	I1028 11:52:16.259490   92926 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:52:16.259860   92926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:16.259911   92926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:16.275203   92926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37287
	I1028 11:52:16.275803   92926 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:16.276344   92926 main.go:141] libmachine: Using API Version  1
	I1028 11:52:16.276367   92926 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:16.276783   92926 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:16.276985   92926 main.go:141] libmachine: (functional-665758) Calling .DriverName
	I1028 11:52:16.314733   92926 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 11:52:16.315848   92926 start.go:297] selected driver: kvm2
	I1028 11:52:16.315867   92926 start.go:901] validating driver "kvm2" against &{Name:functional-665758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-665758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:52:16.316008   92926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:52:16.318322   92926 out.go:201] 
	W1028 11:52:16.319542   92926 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 11:52:16.320564   92926 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-665758 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-665758 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-665758 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (154.529921ms)

                                                
                                                
-- stdout --
	* [functional-665758] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:52:16.077858   92875 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:52:16.078004   92875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:16.078018   92875 out.go:358] Setting ErrFile to fd 2...
	I1028 11:52:16.078027   92875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:52:16.078408   92875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 11:52:16.079162   92875 out.go:352] Setting JSON to false
	I1028 11:52:16.080497   92875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5686,"bootTime":1730110650,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:52:16.080639   92875 start.go:139] virtualization: kvm guest
	I1028 11:52:16.082916   92875 out.go:177] * [functional-665758] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1028 11:52:16.084490   92875 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:52:16.084507   92875 notify.go:220] Checking for updates...
	I1028 11:52:16.087530   92875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:52:16.089045   92875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 11:52:16.090284   92875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 11:52:16.091530   92875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:52:16.092790   92875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:52:16.094443   92875 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:52:16.094866   92875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:16.094923   92875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:16.111099   92875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I1028 11:52:16.111651   92875 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:16.112448   92875 main.go:141] libmachine: Using API Version  1
	I1028 11:52:16.112489   92875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:16.112920   92875 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:16.113053   92875 main.go:141] libmachine: (functional-665758) Calling .DriverName
	I1028 11:52:16.113291   92875 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:52:16.113706   92875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:52:16.113751   92875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:52:16.134771   92875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1028 11:52:16.135297   92875 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:52:16.135920   92875 main.go:141] libmachine: Using API Version  1
	I1028 11:52:16.135948   92875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:52:16.136430   92875 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:52:16.136651   92875 main.go:141] libmachine: (functional-665758) Calling .DriverName
	I1028 11:52:16.169541   92875 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1028 11:52:16.170896   92875 start.go:297] selected driver: kvm2
	I1028 11:52:16.170917   92875 start.go:901] validating driver "kvm2" against &{Name:functional-665758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-665758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:52:16.171055   92875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:52:16.173421   92875 out.go:201] 
	W1028 11:52:16.174651   92875 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 11:52:16.175814   92875 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-665758 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-665758 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-94rzt" [dfbf9b2d-ea9d-46b8-9c13-ebf7c2876097] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
2024/10/28 11:52:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-94rzt" [dfbf9b2d-ea9d-46b8-9c13-ebf7c2876097] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.0046182s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.154:30224
functional_test.go:1675: http://192.168.39.154:30224: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-94rzt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.154:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.154:30224
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.47s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5f9360a0-6130-496a-bf97-f7b371af0510] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003874513s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-665758 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-665758 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-665758 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-665758 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ad263b5f-4e07-4980-898d-2b3bd347ff06] Pending
helpers_test.go:344: "sp-pod" [ad263b5f-4e07-4980-898d-2b3bd347ff06] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ad263b5f-4e07-4980-898d-2b3bd347ff06] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003814329s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-665758 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-665758 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-665758 delete -f testdata/storage-provisioner/pod.yaml: (5.171199706s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-665758 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [01d0916e-8394-45d1-84ce-feda39a741a8] Pending
helpers_test.go:344: "sp-pod" [01d0916e-8394-45d1-84ce-feda39a741a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [01d0916e-8394-45d1-84ce-feda39a741a8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003757647s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-665758 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.96s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh -n functional-665758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cp functional-665758:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1484095857/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh -n functional-665758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh -n functional-665758 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.43s)

                                                
                                    
TestFunctional/parallel/MySQL (26.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-665758 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-xrpdr" [19b93a9e-048e-47a5-bf91-07ed8b342ae5] Pending
helpers_test.go:344: "mysql-6cdb49bbb-xrpdr" [19b93a9e-048e-47a5-bf91-07ed8b342ae5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-xrpdr" [19b93a9e-048e-47a5-bf91-07ed8b342ae5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.003316321s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-665758 exec mysql-6cdb49bbb-xrpdr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-665758 exec mysql-6cdb49bbb-xrpdr -- mysql -ppassword -e "show databases;": exit status 1 (126.146022ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:52:53.946552   84965 retry.go:31] will retry after 674.933594ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-665758 exec mysql-6cdb49bbb-xrpdr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-665758 exec mysql-6cdb49bbb-xrpdr -- mysql -ppassword -e "show databases;": exit status 1 (124.269309ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:52:54.746485   84965 retry.go:31] will retry after 1.732738196s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-665758 exec mysql-6cdb49bbb-xrpdr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.97s)

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/84965/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo cat /etc/test/nested/copy/84965/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/84965.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo cat /etc/ssl/certs/84965.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/84965.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo cat /usr/share/ca-certificates/84965.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/849652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo cat /etc/ssl/certs/849652.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/849652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo cat /usr/share/ca-certificates/849652.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-665758 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh "sudo systemctl is-active docker": exit status 1 (206.110615ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh "sudo systemctl is-active containerd": exit status 1 (205.349736ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                    
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-665758 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-665758 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-ckw4p" [a5a61d07-dc56-4c3b-9ffc-ec6131434636] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-ckw4p" [a5a61d07-dc56-4c3b-9ffc-ec6131434636] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00313537s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "338.491474ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.428026ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "388.701293ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "74.083539ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdany-port2946950401/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730116334879063669" to /tmp/TestFunctionalparallelMountCmdany-port2946950401/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730116334879063669" to /tmp/TestFunctionalparallelMountCmdany-port2946950401/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730116334879063669" to /tmp/TestFunctionalparallelMountCmdany-port2946950401/001/test-1730116334879063669
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.522357ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 11:52:15.139890   84965 retry.go:31] will retry after 702.394561ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 28 11:52 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 28 11:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 28 11:52 test-1730116334879063669
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh cat /mount-9p/test-1730116334879063669
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-665758 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fcf15e32-ea1f-4b37-862c-08e798a5b06d] Pending
helpers_test.go:344: "busybox-mount" [fcf15e32-ea1f-4b37-862c-08e798a5b06d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fcf15e32-ea1f-4b37-862c-08e798a5b06d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fcf15e32-ea1f-4b37-862c-08e798a5b06d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004376929s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-665758 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdany-port2946950401/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.84s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdspecific-port1377354150/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (222.612548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 11:52:22.944068   84965 retry.go:31] will retry after 372.939044ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdspecific-port1377354150/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh "sudo umount -f /mount-9p": exit status 1 (248.393995ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-665758 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdspecific-port1377354150/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1769857450/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1769857450/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1769857450/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T" /mount1: exit status 1 (301.228837ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 11:52:24.702324   84965 retry.go:31] will retry after 629.941166ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-665758 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1769857450/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1769857450/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-665758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1769857450/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 service list -o json
functional_test.go:1494: Took "481.954221ms" to run "out/minikube-linux-amd64 -p functional-665758 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.154:30546
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.154:30546
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-665758 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-665758
localhost/kicbase/echo-server:functional-665758
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-665758 image ls --format short --alsologtostderr:
I1028 11:52:38.090171   94719 out.go:345] Setting OutFile to fd 1 ...
I1028 11:52:38.090407   94719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.090437   94719 out.go:358] Setting ErrFile to fd 2...
I1028 11:52:38.090453   94719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.090777   94719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
I1028 11:52:38.091463   94719 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.091583   94719 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.091991   94719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.092046   94719 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.109416   94719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
I1028 11:52:38.110215   94719 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.110865   94719 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.110899   94719 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.111269   94719 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.111480   94719 main.go:141] libmachine: (functional-665758) Calling .GetState
I1028 11:52:38.113955   94719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.114013   94719 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.129973   94719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
I1028 11:52:38.130455   94719 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.130985   94719 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.131004   94719 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.131508   94719 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.131684   94719 main.go:141] libmachine: (functional-665758) Calling .DriverName
I1028 11:52:38.131879   94719 ssh_runner.go:195] Run: systemctl --version
I1028 11:52:38.131929   94719 main.go:141] libmachine: (functional-665758) Calling .GetSSHHostname
I1028 11:52:38.134807   94719 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.135327   94719 main.go:141] libmachine: (functional-665758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:02:31", ip: ""} in network mk-functional-665758: {Iface:virbr1 ExpiryTime:2024-10-28 12:49:41 +0000 UTC Type:0 Mac:52:54:00:67:02:31 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-665758 Clientid:01:52:54:00:67:02:31}
I1028 11:52:38.135390   94719 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined IP address 192.168.39.154 and MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.135655   94719 main.go:141] libmachine: (functional-665758) Calling .GetSSHPort
I1028 11:52:38.135848   94719 main.go:141] libmachine: (functional-665758) Calling .GetSSHKeyPath
I1028 11:52:38.135981   94719 main.go:141] libmachine: (functional-665758) Calling .GetSSHUsername
I1028 11:52:38.136220   94719 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/functional-665758/id_rsa Username:docker}
I1028 11:52:38.262686   94719 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:52:38.488892   94719 main.go:141] libmachine: Making call to close driver server
I1028 11:52:38.488907   94719 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:38.489248   94719 main.go:141] libmachine: (functional-665758) DBG | Closing plugin on server side
I1028 11:52:38.489278   94719 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:38.489291   94719 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:52:38.489307   94719 main.go:141] libmachine: Making call to close driver server
I1028 11:52:38.489318   94719 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:38.489547   94719 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:38.489559   94719 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-665758 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-665758  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-665758  | 1a1413746ce00 | 3.33kB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-665758 image ls --format table --alsologtostderr:
I1028 11:52:39.037020   94843 out.go:345] Setting OutFile to fd 1 ...
I1028 11:52:39.037325   94843 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:39.037337   94843 out.go:358] Setting ErrFile to fd 2...
I1028 11:52:39.037345   94843 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:39.037618   94843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
I1028 11:52:39.038448   94843 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:39.038593   94843 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:39.039184   94843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:39.039237   94843 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:39.054890   94843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
I1028 11:52:39.055406   94843 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:39.056115   94843 main.go:141] libmachine: Using API Version  1
I1028 11:52:39.056142   94843 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:39.056550   94843 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:39.056727   94843 main.go:141] libmachine: (functional-665758) Calling .GetState
I1028 11:52:39.058749   94843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:39.058811   94843 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:39.073745   94843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
I1028 11:52:39.074232   94843 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:39.074883   94843 main.go:141] libmachine: Using API Version  1
I1028 11:52:39.074906   94843 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:39.075306   94843 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:39.075552   94843 main.go:141] libmachine: (functional-665758) Calling .DriverName
I1028 11:52:39.075811   94843 ssh_runner.go:195] Run: systemctl --version
I1028 11:52:39.075847   94843 main.go:141] libmachine: (functional-665758) Calling .GetSSHHostname
I1028 11:52:39.078879   94843 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:39.079338   94843 main.go:141] libmachine: (functional-665758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:02:31", ip: ""} in network mk-functional-665758: {Iface:virbr1 ExpiryTime:2024-10-28 12:49:41 +0000 UTC Type:0 Mac:52:54:00:67:02:31 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-665758 Clientid:01:52:54:00:67:02:31}
I1028 11:52:39.079367   94843 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined IP address 192.168.39.154 and MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:39.079612   94843 main.go:141] libmachine: (functional-665758) Calling .GetSSHPort
I1028 11:52:39.079814   94843 main.go:141] libmachine: (functional-665758) Calling .GetSSHKeyPath
I1028 11:52:39.080001   94843 main.go:141] libmachine: (functional-665758) Calling .GetSSHUsername
I1028 11:52:39.080151   94843 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/functional-665758/id_rsa Username:docker}
I1028 11:52:39.234144   94843 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:52:39.315832   94843 main.go:141] libmachine: Making call to close driver server
I1028 11:52:39.315853   94843 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:39.316144   94843 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:39.316163   94843 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:52:39.316162   94843 main.go:141] libmachine: (functional-665758) DBG | Closing plugin on server side
I1028 11:52:39.316172   94843 main.go:141] libmachine: Making call to close driver server
I1028 11:52:39.316181   94843 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:39.316369   94843 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:39.316387   94843 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-665758 image ls --format json --alsologtostderr:
[{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfb
e4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/
library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-665758"],"size":"4943877"},{"id":"1a1413746ce00fb0ceaccff6d1ab1b3e289615a67d33ab2ad4b40f6749f4cc95","repoDigests":["localhost/minikube-local-cache-test@sha256
:5f8ba9f54e93e0310ee4d2207cad7879cd5d870991b19634693ffb86a5af2166"],"repoTags":["localhost/minikube-local-cache-test:functional-665758"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8c
fa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c4
19e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io
/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-665758 image ls --format json --alsologtostderr:
I1028 11:52:38.554841   94772 out.go:345] Setting OutFile to fd 1 ...
I1028 11:52:38.554993   94772 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.555004   94772 out.go:358] Setting ErrFile to fd 2...
I1028 11:52:38.555010   94772 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.555269   94772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
I1028 11:52:38.558920   94772 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.559042   94772 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.559400   94772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.559454   94772 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.575320   94772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
I1028 11:52:38.575780   94772 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.576488   94772 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.576513   94772 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.576955   94772 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.577170   94772 main.go:141] libmachine: (functional-665758) Calling .GetState
I1028 11:52:38.579156   94772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.579205   94772 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.594324   94772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
I1028 11:52:38.594814   94772 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.595392   94772 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.595409   94772 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.595835   94772 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.596004   94772 main.go:141] libmachine: (functional-665758) Calling .DriverName
I1028 11:52:38.596222   94772 ssh_runner.go:195] Run: systemctl --version
I1028 11:52:38.596252   94772 main.go:141] libmachine: (functional-665758) Calling .GetSSHHostname
I1028 11:52:38.599043   94772 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.599399   94772 main.go:141] libmachine: (functional-665758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:02:31", ip: ""} in network mk-functional-665758: {Iface:virbr1 ExpiryTime:2024-10-28 12:49:41 +0000 UTC Type:0 Mac:52:54:00:67:02:31 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-665758 Clientid:01:52:54:00:67:02:31}
I1028 11:52:38.599423   94772 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined IP address 192.168.39.154 and MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.599680   94772 main.go:141] libmachine: (functional-665758) Calling .GetSSHPort
I1028 11:52:38.599869   94772 main.go:141] libmachine: (functional-665758) Calling .GetSSHKeyPath
I1028 11:52:38.600032   94772 main.go:141] libmachine: (functional-665758) Calling .GetSSHUsername
I1028 11:52:38.600150   94772 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/functional-665758/id_rsa Username:docker}
I1028 11:52:38.724355   94772 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:52:38.935412   94772 main.go:141] libmachine: Making call to close driver server
I1028 11:52:38.935430   94772 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:38.935728   94772 main.go:141] libmachine: (functional-665758) DBG | Closing plugin on server side
I1028 11:52:38.935748   94772 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:38.935765   94772 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:52:38.935779   94772 main.go:141] libmachine: Making call to close driver server
I1028 11:52:38.935787   94772 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:38.936023   94772 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:38.936051   94772 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-665758 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 1a1413746ce00fb0ceaccff6d1ab1b3e289615a67d33ab2ad4b40f6749f4cc95
repoDigests:
- localhost/minikube-local-cache-test@sha256:5f8ba9f54e93e0310ee4d2207cad7879cd5d870991b19634693ffb86a5af2166
repoTags:
- localhost/minikube-local-cache-test:functional-665758
size: "3330"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-665758
size: "4943877"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-665758 image ls --format yaml --alsologtostderr:
I1028 11:52:38.100207   94720 out.go:345] Setting OutFile to fd 1 ...
I1028 11:52:38.100506   94720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.100521   94720 out.go:358] Setting ErrFile to fd 2...
I1028 11:52:38.100528   94720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.100826   94720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
I1028 11:52:38.101665   94720 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.101821   94720 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.102431   94720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.102519   94720 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.118465   94720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41917
I1028 11:52:38.119162   94720 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.119795   94720 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.119820   94720 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.120283   94720 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.120517   94720 main.go:141] libmachine: (functional-665758) Calling .GetState
I1028 11:52:38.122667   94720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.122726   94720 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.138166   94720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
I1028 11:52:38.138628   94720 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.139150   94720 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.139193   94720 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.139514   94720 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.139710   94720 main.go:141] libmachine: (functional-665758) Calling .DriverName
I1028 11:52:38.139901   94720 ssh_runner.go:195] Run: systemctl --version
I1028 11:52:38.139928   94720 main.go:141] libmachine: (functional-665758) Calling .GetSSHHostname
I1028 11:52:38.142441   94720 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.142754   94720 main.go:141] libmachine: (functional-665758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:02:31", ip: ""} in network mk-functional-665758: {Iface:virbr1 ExpiryTime:2024-10-28 12:49:41 +0000 UTC Type:0 Mac:52:54:00:67:02:31 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-665758 Clientid:01:52:54:00:67:02:31}
I1028 11:52:38.142783   94720 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined IP address 192.168.39.154 and MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.142978   94720 main.go:141] libmachine: (functional-665758) Calling .GetSSHPort
I1028 11:52:38.143162   94720 main.go:141] libmachine: (functional-665758) Calling .GetSSHKeyPath
I1028 11:52:38.143328   94720 main.go:141] libmachine: (functional-665758) Calling .GetSSHUsername
I1028 11:52:38.143496   94720 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/functional-665758/id_rsa Username:docker}
I1028 11:52:38.265070   94720 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:52:38.492969   94720 main.go:141] libmachine: Making call to close driver server
I1028 11:52:38.492991   94720 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:38.493234   94720 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:38.493253   94720 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:52:38.493253   94720 main.go:141] libmachine: (functional-665758) DBG | Closing plugin on server side
I1028 11:52:38.493264   94720 main.go:141] libmachine: Making call to close driver server
I1028 11:52:38.493273   94720 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:38.493503   94720 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:38.493514   94720 main.go:141] libmachine: (functional-665758) DBG | Closing plugin on server side
I1028 11:52:38.493518   94720 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)

TestFunctional/parallel/ImageCommands/ImageBuild (9.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-665758 ssh pgrep buildkitd: exit status 1 (253.624711ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image build -t localhost/my-image:functional-665758 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 image build -t localhost/my-image:functional-665758 testdata/build --alsologtostderr: (8.765278468s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-665758 image build -t localhost/my-image:functional-665758 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7129fba3648
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-665758
--> 31a80663be9
Successfully tagged localhost/my-image:functional-665758
31a80663be90585cbfa947c8333928617b9915ec178813f0b0814577bb5240bf
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-665758 image build -t localhost/my-image:functional-665758 testdata/build --alsologtostderr:
I1028 11:52:38.797207   94819 out.go:345] Setting OutFile to fd 1 ...
I1028 11:52:38.797324   94819 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.797334   94819 out.go:358] Setting ErrFile to fd 2...
I1028 11:52:38.797340   94819 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:52:38.797512   94819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
I1028 11:52:38.798135   94819 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.798749   94819 config.go:182] Loaded profile config "functional-665758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:52:38.799160   94819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.799209   94819 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.814198   94819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
I1028 11:52:38.814731   94819 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.815316   94819 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.815339   94819 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.815747   94819 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.816021   94819 main.go:141] libmachine: (functional-665758) Calling .GetState
I1028 11:52:38.818001   94819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:52:38.818059   94819 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:52:38.832564   94819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
I1028 11:52:38.832998   94819 main.go:141] libmachine: () Calling .GetVersion
I1028 11:52:38.833464   94819 main.go:141] libmachine: Using API Version  1
I1028 11:52:38.833485   94819 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:52:38.833786   94819 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:52:38.834004   94819 main.go:141] libmachine: (functional-665758) Calling .DriverName
I1028 11:52:38.834213   94819 ssh_runner.go:195] Run: systemctl --version
I1028 11:52:38.834244   94819 main.go:141] libmachine: (functional-665758) Calling .GetSSHHostname
I1028 11:52:38.837335   94819 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.837780   94819 main.go:141] libmachine: (functional-665758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:02:31", ip: ""} in network mk-functional-665758: {Iface:virbr1 ExpiryTime:2024-10-28 12:49:41 +0000 UTC Type:0 Mac:52:54:00:67:02:31 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-665758 Clientid:01:52:54:00:67:02:31}
I1028 11:52:38.837817   94819 main.go:141] libmachine: (functional-665758) DBG | domain functional-665758 has defined IP address 192.168.39.154 and MAC address 52:54:00:67:02:31 in network mk-functional-665758
I1028 11:52:38.837943   94819 main.go:141] libmachine: (functional-665758) Calling .GetSSHPort
I1028 11:52:38.838099   94819 main.go:141] libmachine: (functional-665758) Calling .GetSSHKeyPath
I1028 11:52:38.838246   94819 main.go:141] libmachine: (functional-665758) Calling .GetSSHUsername
I1028 11:52:38.838424   94819 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/functional-665758/id_rsa Username:docker}
I1028 11:52:38.958082   94819 build_images.go:161] Building image from path: /tmp/build.1679741344.tar
I1028 11:52:38.958154   94819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 11:52:38.992240   94819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1679741344.tar
I1028 11:52:39.012311   94819 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1679741344.tar: stat -c "%s %y" /var/lib/minikube/build/build.1679741344.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1679741344.tar': No such file or directory
I1028 11:52:39.012348   94819 ssh_runner.go:362] scp /tmp/build.1679741344.tar --> /var/lib/minikube/build/build.1679741344.tar (3072 bytes)
I1028 11:52:39.068127   94819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1679741344
I1028 11:52:39.092759   94819 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1679741344 -xf /var/lib/minikube/build/build.1679741344.tar
I1028 11:52:39.109661   94819 crio.go:315] Building image: /var/lib/minikube/build/build.1679741344
I1028 11:52:39.109721   94819 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-665758 /var/lib/minikube/build/build.1679741344 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1028 11:52:47.448315   94819 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-665758 /var/lib/minikube/build/build.1679741344 --cgroup-manager=cgroupfs: (8.338570541s)
I1028 11:52:47.448389   94819 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1679741344
I1028 11:52:47.461932   94819 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1679741344.tar
I1028 11:52:47.473245   94819 build_images.go:217] Built localhost/my-image:functional-665758 from /tmp/build.1679741344.tar
I1028 11:52:47.473290   94819 build_images.go:133] succeeded building to: functional-665758
I1028 11:52:47.473297   94819 build_images.go:134] failed building to: 
I1028 11:52:47.473329   94819 main.go:141] libmachine: Making call to close driver server
I1028 11:52:47.473343   94819 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:47.473611   94819 main.go:141] libmachine: (functional-665758) DBG | Closing plugin on server side
I1028 11:52:47.473635   94819 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:47.473650   94819 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:52:47.473665   94819 main.go:141] libmachine: Making call to close driver server
I1028 11:52:47.473676   94819 main.go:141] libmachine: (functional-665758) Calling .Close
I1028 11:52:47.473947   94819 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:52:47.473962   94819 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.27s)
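The three podman build steps recorded in the stdout above (STEP 1/3 through STEP 3/3) imply that the testdata/build context holds a Dockerfile along these lines; this is a sketch reconstructed from the logged steps, not the verbatim contents of that directory:
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /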

TestFunctional/parallel/ImageCommands/Setup (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.491379601s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-665758
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.51s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image load --daemon kicbase/echo-server:functional-665758 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 image load --daemon kicbase/echo-server:functional-665758 --alsologtostderr: (1.173458795s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image load --daemon kicbase/echo-server:functional-665758 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-665758
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image load --daemon kicbase/echo-server:functional-665758 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-665758 image load --daemon kicbase/echo-server:functional-665758 --alsologtostderr: (3.04013333s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image save kicbase/echo-server:functional-665758 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image rm kicbase/echo-server:functional-665758 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-665758
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-665758 image save --daemon kicbase/echo-server:functional-665758 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-665758
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-665758
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-665758
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-665758
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (190.15s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-273199 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 11:54:20.376905   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:54:48.079427   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-273199 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m9.504149087s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (190.15s)

TestMultiControlPlane/serial/DeployApp (6.15s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-273199 -- rollout status deployment/busybox: (4.032808435s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-8tvkk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-fnvwg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-g54mk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-8tvkk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-fnvwg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-g54mk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-8tvkk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-fnvwg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-g54mk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.15s)

TestMultiControlPlane/serial/PingHostFromPods (1.13s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-8tvkk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-8tvkk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-fnvwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-fnvwg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-g54mk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-273199 -- exec busybox-7dff88458-g54mk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

TestMultiControlPlane/serial/AddWorkerNode (53.38s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-273199 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-273199 -v=7 --alsologtostderr: (52.562698934s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.38s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-273199 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (12.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp testdata/cp-test.txt ha-273199:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199:/home/docker/cp-test.txt ha-273199-m02:/home/docker/cp-test_ha-273199_ha-273199-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test_ha-273199_ha-273199-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199:/home/docker/cp-test.txt ha-273199-m03:/home/docker/cp-test_ha-273199_ha-273199-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test_ha-273199_ha-273199-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199:/home/docker/cp-test.txt ha-273199-m04:/home/docker/cp-test_ha-273199_ha-273199-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test_ha-273199_ha-273199-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp testdata/cp-test.txt ha-273199-m02:/home/docker/cp-test.txt
E1028 11:57:13.448753   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:13.455160   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:13.466501   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:13.487953   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:13.529394   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test.txt"
E1028 11:57:13.610842   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:13.772264   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test.txt"
E1028 11:57:14.094243   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m02:/home/docker/cp-test.txt ha-273199:/home/docker/cp-test_ha-273199-m02_ha-273199.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test_ha-273199-m02_ha-273199.txt"
E1028 11:57:14.736536   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m02:/home/docker/cp-test.txt ha-273199-m03:/home/docker/cp-test_ha-273199-m02_ha-273199-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test_ha-273199-m02_ha-273199-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m02:/home/docker/cp-test.txt ha-273199-m04:/home/docker/cp-test_ha-273199-m02_ha-273199-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test.txt"
E1028 11:57:16.018419   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test_ha-273199-m02_ha-273199-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp testdata/cp-test.txt ha-273199-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt ha-273199:/home/docker/cp-test_ha-273199-m03_ha-273199.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test_ha-273199-m03_ha-273199.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt ha-273199-m02:/home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test_ha-273199-m03_ha-273199-m02.txt"
E1028 11:57:18.580147   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m03:/home/docker/cp-test.txt ha-273199-m04:/home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test_ha-273199-m03_ha-273199-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp testdata/cp-test.txt ha-273199-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3536995069/001/cp-test_ha-273199-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt ha-273199:/home/docker/cp-test_ha-273199-m04_ha-273199.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199 "sudo cat /home/docker/cp-test_ha-273199-m04_ha-273199.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt ha-273199-m02:/home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m02 "sudo cat /home/docker/cp-test_ha-273199-m04_ha-273199-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 cp ha-273199-m04:/home/docker/cp-test.txt ha-273199-m03:/home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 ssh -n ha-273199-m03 "sudo cat /home/docker/cp-test_ha-273199-m04_ha-273199-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.76s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.46s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-273199 node delete m03 -v=7 --alsologtostderr: (15.756986583s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.46s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

TestMultiControlPlane/serial/RestartCluster (330.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-273199 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 12:09:20.376916   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:12:13.449007   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:13:36.512342   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-273199 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m30.208967912s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (330.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (74.04s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-273199 --control-plane -v=7 --alsologtostderr
E1028 12:14:20.375806   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-273199 --control-plane -v=7 --alsologtostderr: (1m13.213098477s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-273199 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestJSONOutput/start/Command (81.01s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-870554 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-870554 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.013675357s)
--- PASS: TestJSONOutput/start/Command (81.01s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-870554 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-870554 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.62s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-870554 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-870554 --output=json --user=testUser: (6.619661973s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-117792 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-117792 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.071252ms)

-- stdout --
	{"specversion":"1.0","id":"15a9d8ba-fb0a-475e-9a23-b98770d3be19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-117792] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a09fdd62-d301-4003-a92c-9b26ddafbd0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19875"}}
	{"specversion":"1.0","id":"d0bebb1c-8d46-43b5-bd28-2139e5aea079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9ff17561-3eec-4f30-8d52-c7f7f7c0f951","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig"}}
	{"specversion":"1.0","id":"46749dde-1b30-4f6d-b3d7-7cbdfcfa0044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube"}}
	{"specversion":"1.0","id":"70fb58e2-e542-45ac-9230-40d2293ec7d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c2b696eb-ddc1-4f83-a400-816a902c68bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e2e42df6-b9a1-4865-a5ff-2d83824afe4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-117792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-117792
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (82.53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-630418 --driver=kvm2  --container-runtime=crio
E1028 12:17:13.451013   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-630418 --driver=kvm2  --container-runtime=crio: (37.382326153s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-641713 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-641713 --driver=kvm2  --container-runtime=crio: (42.348885004s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-630418
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-641713
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-641713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-641713
helpers_test.go:175: Cleaning up "first-630418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-630418
--- PASS: TestMinikubeProfile (82.53s)

TestMountStart/serial/StartWithMountFirst (24.36s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-509682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-509682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.364754341s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.36s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-509682 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-509682 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (25.81s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-525050 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1028 12:19:20.376865   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-525050 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.808326532s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.81s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525050 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525050 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-509682 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525050 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525050 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-525050
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-525050: (1.270978158s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (21.81s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-525050
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-525050: (20.809545831s)
--- PASS: TestMountStart/serial/RestartStopped (21.81s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525050 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525050 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (105.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-363277 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-363277 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m45.370163149s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.76s)

TestMultiNode/serial/DeployApp2Nodes (5.02s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-363277 -- rollout status deployment/busybox: (3.59650717s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-dxgwj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-rj4n2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-dxgwj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-rj4n2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-dxgwj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-rj4n2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-dxgwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-dxgwj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-rj4n2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-363277 -- exec busybox-7dff88458-rj4n2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

TestMultiNode/serial/AddNode (46.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-363277 -v 3 --alsologtostderr
E1028 12:22:13.449724   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:22:23.443833   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-363277 -v 3 --alsologtostderr: (45.82001685s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.35s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-363277 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.54s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.54s)

TestMultiNode/serial/CopyFile (6.89s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp testdata/cp-test.txt multinode-363277:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4154964164/001/cp-test_multinode-363277.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277:/home/docker/cp-test.txt multinode-363277-m02:/home/docker/cp-test_multinode-363277_multinode-363277-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m02 "sudo cat /home/docker/cp-test_multinode-363277_multinode-363277-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277:/home/docker/cp-test.txt multinode-363277-m03:/home/docker/cp-test_multinode-363277_multinode-363277-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m03 "sudo cat /home/docker/cp-test_multinode-363277_multinode-363277-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp testdata/cp-test.txt multinode-363277-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4154964164/001/cp-test_multinode-363277-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt multinode-363277:/home/docker/cp-test_multinode-363277-m02_multinode-363277.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277 "sudo cat /home/docker/cp-test_multinode-363277-m02_multinode-363277.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277-m02:/home/docker/cp-test.txt multinode-363277-m03:/home/docker/cp-test_multinode-363277-m02_multinode-363277-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m03 "sudo cat /home/docker/cp-test_multinode-363277-m02_multinode-363277-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp testdata/cp-test.txt multinode-363277-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4154964164/001/cp-test_multinode-363277-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt multinode-363277:/home/docker/cp-test_multinode-363277-m03_multinode-363277.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277 "sudo cat /home/docker/cp-test_multinode-363277-m03_multinode-363277.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 cp multinode-363277-m03:/home/docker/cp-test.txt multinode-363277-m02:/home/docker/cp-test_multinode-363277-m03_multinode-363277-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 ssh -n multinode-363277-m02 "sudo cat /home/docker/cp-test_multinode-363277-m03_multinode-363277-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.89s)

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-363277 node stop m03: (1.349422875s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-363277 status: exit status 7 (391.349213ms)

-- stdout --
	multinode-363277
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-363277-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-363277-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr: exit status 7 (390.857791ms)

-- stdout --
	multinode-363277
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-363277-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-363277-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 12:22:34.815239  112237 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:22:34.815554  112237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:22:34.815570  112237 out.go:358] Setting ErrFile to fd 2...
	I1028 12:22:34.815576  112237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:22:34.815834  112237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:22:34.815999  112237 out.go:352] Setting JSON to false
	I1028 12:22:34.816031  112237 mustload.go:65] Loading cluster: multinode-363277
	I1028 12:22:34.816102  112237 notify.go:220] Checking for updates...
	I1028 12:22:34.816923  112237 config.go:182] Loaded profile config "multinode-363277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:22:34.816962  112237 status.go:174] checking status of multinode-363277 ...
	I1028 12:22:34.818070  112237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:22:34.818130  112237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:22:34.833796  112237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33895
	I1028 12:22:34.834236  112237 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:22:34.834800  112237 main.go:141] libmachine: Using API Version  1
	I1028 12:22:34.834823  112237 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:22:34.835207  112237 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:22:34.835422  112237 main.go:141] libmachine: (multinode-363277) Calling .GetState
	I1028 12:22:34.836844  112237 status.go:371] multinode-363277 host status = "Running" (err=<nil>)
	I1028 12:22:34.836862  112237 host.go:66] Checking if "multinode-363277" exists ...
	I1028 12:22:34.837156  112237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:22:34.837189  112237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:22:34.851928  112237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36673
	I1028 12:22:34.852352  112237 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:22:34.852824  112237 main.go:141] libmachine: Using API Version  1
	I1028 12:22:34.852843  112237 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:22:34.853143  112237 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:22:34.853301  112237 main.go:141] libmachine: (multinode-363277) Calling .GetIP
	I1028 12:22:34.855799  112237 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:22:34.856163  112237 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:22:34.856187  112237 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:22:34.856326  112237 host.go:66] Checking if "multinode-363277" exists ...
	I1028 12:22:34.856645  112237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:22:34.856694  112237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:22:34.871506  112237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I1028 12:22:34.871974  112237 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:22:34.872415  112237 main.go:141] libmachine: Using API Version  1
	I1028 12:22:34.872440  112237 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:22:34.872780  112237 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:22:34.872940  112237 main.go:141] libmachine: (multinode-363277) Calling .DriverName
	I1028 12:22:34.873113  112237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:22:34.873134  112237 main.go:141] libmachine: (multinode-363277) Calling .GetSSHHostname
	I1028 12:22:34.875513  112237 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:22:34.875952  112237 main.go:141] libmachine: (multinode-363277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:1c:5e", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:01 +0000 UTC Type:0 Mac:52:54:00:e8:1c:5e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-363277 Clientid:01:52:54:00:e8:1c:5e}
	I1028 12:22:34.875976  112237 main.go:141] libmachine: (multinode-363277) DBG | domain multinode-363277 has defined IP address 192.168.39.174 and MAC address 52:54:00:e8:1c:5e in network mk-multinode-363277
	I1028 12:22:34.876077  112237 main.go:141] libmachine: (multinode-363277) Calling .GetSSHPort
	I1028 12:22:34.876221  112237 main.go:141] libmachine: (multinode-363277) Calling .GetSSHKeyPath
	I1028 12:22:34.876385  112237 main.go:141] libmachine: (multinode-363277) Calling .GetSSHUsername
	I1028 12:22:34.876542  112237 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277/id_rsa Username:docker}
	I1028 12:22:34.949807  112237 ssh_runner.go:195] Run: systemctl --version
	I1028 12:22:34.955657  112237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:34.968489  112237 kubeconfig.go:125] found "multinode-363277" server: "https://192.168.39.174:8443"
	I1028 12:22:34.968517  112237 api_server.go:166] Checking apiserver status ...
	I1028 12:22:34.968549  112237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:22:34.979909  112237 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1061/cgroup
	W1028 12:22:34.988056  112237 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:22:34.988104  112237 ssh_runner.go:195] Run: ls
	I1028 12:22:34.991856  112237 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1028 12:22:34.995745  112237 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I1028 12:22:34.995765  112237 status.go:463] multinode-363277 apiserver status = Running (err=<nil>)
	I1028 12:22:34.995777  112237 status.go:176] multinode-363277 status: &{Name:multinode-363277 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 12:22:34.995817  112237 status.go:174] checking status of multinode-363277-m02 ...
	I1028 12:22:34.996199  112237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:22:34.996264  112237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:22:35.011492  112237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I1028 12:22:35.011950  112237 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:22:35.012450  112237 main.go:141] libmachine: Using API Version  1
	I1028 12:22:35.012481  112237 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:22:35.012781  112237 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:22:35.012969  112237 main.go:141] libmachine: (multinode-363277-m02) Calling .GetState
	I1028 12:22:35.014290  112237 status.go:371] multinode-363277-m02 host status = "Running" (err=<nil>)
	I1028 12:22:35.014308  112237 host.go:66] Checking if "multinode-363277-m02" exists ...
	I1028 12:22:35.014641  112237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:22:35.014677  112237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:22:35.029833  112237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I1028 12:22:35.030234  112237 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:22:35.030744  112237 main.go:141] libmachine: Using API Version  1
	I1028 12:22:35.030771  112237 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:22:35.031069  112237 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:22:35.031249  112237 main.go:141] libmachine: (multinode-363277-m02) Calling .GetIP
	I1028 12:22:35.033847  112237 main.go:141] libmachine: (multinode-363277-m02) DBG | domain multinode-363277-m02 has defined MAC address 52:54:00:53:ef:7b in network mk-multinode-363277
	I1028 12:22:35.034226  112237 main.go:141] libmachine: (multinode-363277-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:ef:7b", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:58 +0000 UTC Type:0 Mac:52:54:00:53:ef:7b Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-363277-m02 Clientid:01:52:54:00:53:ef:7b}
	I1028 12:22:35.034269  112237 main.go:141] libmachine: (multinode-363277-m02) DBG | domain multinode-363277-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:53:ef:7b in network mk-multinode-363277
	I1028 12:22:35.034401  112237 host.go:66] Checking if "multinode-363277-m02" exists ...
	I1028 12:22:35.034697  112237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:22:35.034741  112237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:22:35.048905  112237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
	I1028 12:22:35.049275  112237 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:22:35.049758  112237 main.go:141] libmachine: Using API Version  1
	I1028 12:22:35.049781  112237 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:22:35.050087  112237 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:22:35.050291  112237 main.go:141] libmachine: (multinode-363277-m02) Calling .DriverName
	I1028 12:22:35.050479  112237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:22:35.050500  112237 main.go:141] libmachine: (multinode-363277-m02) Calling .GetSSHHostname
	I1028 12:22:35.053080  112237 main.go:141] libmachine: (multinode-363277-m02) DBG | domain multinode-363277-m02 has defined MAC address 52:54:00:53:ef:7b in network mk-multinode-363277
	I1028 12:22:35.053484  112237 main.go:141] libmachine: (multinode-363277-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:ef:7b", ip: ""} in network mk-multinode-363277: {Iface:virbr1 ExpiryTime:2024-10-28 13:20:58 +0000 UTC Type:0 Mac:52:54:00:53:ef:7b Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-363277-m02 Clientid:01:52:54:00:53:ef:7b}
	I1028 12:22:35.053513  112237 main.go:141] libmachine: (multinode-363277-m02) DBG | domain multinode-363277-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:53:ef:7b in network mk-multinode-363277
	I1028 12:22:35.053631  112237 main.go:141] libmachine: (multinode-363277-m02) Calling .GetSSHPort
	I1028 12:22:35.053790  112237 main.go:141] libmachine: (multinode-363277-m02) Calling .GetSSHKeyPath
	I1028 12:22:35.053953  112237 main.go:141] libmachine: (multinode-363277-m02) Calling .GetSSHUsername
	I1028 12:22:35.054103  112237 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19875-77800/.minikube/machines/multinode-363277-m02/id_rsa Username:docker}
	I1028 12:22:35.129855  112237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:35.142309  112237 status.go:176] multinode-363277-m02 status: &{Name:multinode-363277-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1028 12:22:35.142337  112237 status.go:174] checking status of multinode-363277-m03 ...
	I1028 12:22:35.142668  112237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:22:35.142704  112237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:22:35.157750  112237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41429
	I1028 12:22:35.158110  112237 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:22:35.158552  112237 main.go:141] libmachine: Using API Version  1
	I1028 12:22:35.158574  112237 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:22:35.158893  112237 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:22:35.159074  112237 main.go:141] libmachine: (multinode-363277-m03) Calling .GetState
	I1028 12:22:35.160378  112237 status.go:371] multinode-363277-m03 host status = "Stopped" (err=<nil>)
	I1028 12:22:35.160390  112237 status.go:384] host is not running, skipping remaining checks
	I1028 12:22:35.160395  112237 status.go:176] multinode-363277-m03 status: &{Name:multinode-363277-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
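For reference, the per-node checks logged above (host state, kubelet, and the apiserver /healthz probe) can be approximated by hand; a minimal sketch using the profile name from this run, and assuming the kubeconfig context that minikube creates for the profile:

$ out/minikube-linux-amd64 -p multinode-363277 status -v=7 --alsologtostderr   # host/kubelet/apiserver per node
$ kubectl --context multinode-363277 get --raw=/healthz                        # same healthz endpoint checked above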

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-363277 node start m03 -v=7 --alsologtostderr: (37.770893497s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.36s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-363277 node delete m03: (1.644578673s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (175.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-363277 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 12:32:13.451272   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-363277 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m55.320889402s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-363277 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (175.84s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-363277
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-363277-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-363277-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (64.828001ms)

                                                
                                                
-- stdout --
	* [multinode-363277-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-363277-m02' is duplicated with machine name 'multinode-363277-m02' in profile 'multinode-363277'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-363277-m03 --driver=kvm2  --container-runtime=crio
E1028 12:34:20.376939   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-363277-m03 --driver=kvm2  --container-runtime=crio: (41.497078725s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-363277
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-363277: exit status 80 (205.939423ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-363277 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-363277-m03 already exists in multinode-363277-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-363277-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.60s)
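For reference, the name collisions exercised above come from multinode machine names (e.g. multinode-363277-m02) sharing the profile namespace; they can be avoided by listing existing profiles before picking a new name. A minimal sketch, where <unused-name> is a placeholder for any name not already in the list:

$ out/minikube-linux-amd64 profile list
$ out/minikube-linux-amd64 start -p <unused-name> --driver=kvm2 --container-runtime=crio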

                                                
                                    
TestScheduledStopUnix (114.36s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-377617 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-377617 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.776041733s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377617 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-377617 -n scheduled-stop-377617
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377617 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1028 12:40:04.091287   84965 retry.go:31] will retry after 107.571µs: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.092440   84965 retry.go:31] will retry after 109.186µs: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.093580   84965 retry.go:31] will retry after 289.717µs: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.094708   84965 retry.go:31] will retry after 341.473µs: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.095827   84965 retry.go:31] will retry after 606.605µs: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.096977   84965 retry.go:31] will retry after 1.064125ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.098116   84965 retry.go:31] will retry after 1.46189ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.100323   84965 retry.go:31] will retry after 859.31µs: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.101458   84965 retry.go:31] will retry after 3.184821ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.105652   84965 retry.go:31] will retry after 4.355434ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.110863   84965 retry.go:31] will retry after 6.755361ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.118060   84965 retry.go:31] will retry after 5.081782ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.123229   84965 retry.go:31] will retry after 13.823554ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.137456   84965 retry.go:31] will retry after 18.467214ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
I1028 12:40:04.156706   84965 retry.go:31] will retry after 30.564143ms: open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/scheduled-stop-377617/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377617 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-377617 -n scheduled-stop-377617
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-377617
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377617 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-377617
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-377617: exit status 7 (67.21138ms)

                                                
                                                
-- stdout --
	scheduled-stop-377617
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-377617 -n scheduled-stop-377617
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-377617 -n scheduled-stop-377617: exit status 7 (64.924506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-377617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-377617
--- PASS: TestScheduledStopUnix (114.36s)
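For reference, the scheduled-stop flow exercised above boils down to three commands; a minimal sketch using the profile name from this run:

$ out/minikube-linux-amd64 stop -p scheduled-stop-377617 --schedule 5m                  # schedule a stop 5 minutes from now
$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-377617     # inspect the pending schedule
$ out/minikube-linux-amd64 stop -p scheduled-stop-377617 --cancel-scheduled             # cancel the pending stop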

                                                
                                    
TestRunningBinaryUpgrade (191.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1671747121 start -p running-upgrade-331593 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1028 12:42:13.449037   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1671747121 start -p running-upgrade-331593 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m58.45644919s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-331593 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-331593 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.573259258s)
helpers_test.go:175: Cleaning up "running-upgrade-331593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-331593
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-331593: (1.266108864s)
--- PASS: TestRunningBinaryUpgrade (191.85s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394868 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-394868 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (83.941825ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-394868] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
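For reference, the MK_USAGE error above is resolved exactly as its hint suggests: drop --kubernetes-version (or unset a globally configured version) before starting with --no-kubernetes. A minimal sketch:

$ out/minikube-linux-amd64 config unset kubernetes-version
$ out/minikube-linux-amd64 start -p NoKubernetes-394868 --no-kubernetes --driver=kvm2  --container-runtime=crio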

                                                
                                    
TestPause/serial/Start (90.55s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-747750 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-747750 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m30.548364728s)
--- PASS: TestPause/serial/Start (90.55s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (88.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394868 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394868 --driver=kvm2  --container-runtime=crio: (1m28.027212616s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-394868 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (88.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394868 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394868 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.131584094s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-394868 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-394868 status -o json: exit status 2 (219.637978ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-394868","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-394868
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-394868: (1.037665538s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.39s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (99.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-747750 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-747750 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.754591568s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (99.79s)

                                                
                                    
TestNoKubernetes/serial/Start (29.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394868 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394868 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.165663929s)
--- PASS: TestNoKubernetes/serial/Start (29.17s)

                                                
                                    
TestNetworkPlugins/group/false (5.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-297280 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-297280 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (136.023505ms)

                                                
                                                
-- stdout --
	* [false-297280] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:43:37.422799  121825 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:43:37.423108  121825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:43:37.423121  121825 out.go:358] Setting ErrFile to fd 2...
	I1028 12:43:37.423127  121825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:43:37.423408  121825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19875-77800/.minikube/bin
	I1028 12:43:37.424185  121825 out.go:352] Setting JSON to false
	I1028 12:43:37.425513  121825 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8767,"bootTime":1730110650,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:43:37.425663  121825 start.go:139] virtualization: kvm guest
	I1028 12:43:37.428249  121825 out.go:177] * [false-297280] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:43:37.429818  121825 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:43:37.429819  121825 notify.go:220] Checking for updates...
	I1028 12:43:37.432389  121825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:43:37.433599  121825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19875-77800/kubeconfig
	I1028 12:43:37.434860  121825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19875-77800/.minikube
	I1028 12:43:37.436117  121825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:43:37.437318  121825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:43:37.439232  121825 config.go:182] Loaded profile config "NoKubernetes-394868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1028 12:43:37.439438  121825 config.go:182] Loaded profile config "pause-747750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:43:37.439587  121825 config.go:182] Loaded profile config "running-upgrade-331593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:43:37.439764  121825 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:43:37.485961  121825 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:43:37.487307  121825 start.go:297] selected driver: kvm2
	I1028 12:43:37.487324  121825 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:43:37.487350  121825 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:43:37.489563  121825 out.go:201] 
	W1028 12:43:37.490975  121825 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1028 12:43:37.492247  121825 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-297280 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-297280" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:42:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.35:8443
  name: pause-747750
contexts:
- context:
    cluster: pause-747750
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:42:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-747750
  name: pause-747750
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-747750
  user:
    client-certificate: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/pause-747750/client.crt
    client-key: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/pause-747750/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-297280

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-297280"

                                                
                                                
----------------------- debugLogs end: false-297280 [took: 5.567486418s] --------------------------------
helpers_test.go:175: Cleaning up "false-297280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-297280
--- PASS: TestNetworkPlugins/group/false (5.85s)
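For reference, the MK_USAGE failure above is expected behavior: with --container-runtime=crio a CNI is required, so --cni=false is rejected by design. A start command that passes this validation selects an explicit CNI instead; a minimal sketch, where the profile name is illustrative and bridge is one of the CNI values minikube accepts:

$ out/minikube-linux-amd64 start -p bridge-297280 --memory=2048 --cni=bridge --driver=kvm2  --container-runtime=crio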

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-394868 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-394868 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.747589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E1028 12:44:20.376101   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (28.814850997s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.393575777s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.21s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-394868
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-394868: (1.704841461s)
--- PASS: TestNoKubernetes/serial/Stop (1.70s)

                                                
                                    
TestPause/serial/Pause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-747750 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-747750 --alsologtostderr -v=5: (1.029370483s)
--- PASS: TestPause/serial/Pause (1.03s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-747750 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-747750 --output=json --layout=cluster: exit status 2 (298.655757ms)

                                                
                                                
-- stdout --
	{"Name":"pause-747750","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-747750","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
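The --layout=cluster JSON above encodes each component's state as an HTTP-style code (200 OK, 405 Stopped, 418 Paused), which is why a paused profile yields a non-zero exit from the status command itself. A sketch of extracting the per-node states (assumes jq is installed and that the pause-747750 profile still exists; it is deleted later in this run):

    out/minikube-linux-amd64 status -p pause-747750 --output=json --layout=cluster \
      | jq -r '.Nodes[] | "\(.Name): apiserver=\(.Components.apiserver.StatusName) kubelet=\(.Components.kubelet.StatusName)"'
    # -> pause-747750: apiserver=Paused kubelet=Stopped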

                                                
                                    
x
+
TestPause/serial/Unpause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-747750 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (48.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394868 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394868 --driver=kvm2  --container-runtime=crio: (48.926316329s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (48.93s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-747750 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.57s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-747750 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-747750 --alsologtostderr -v=5: (1.565790162s)
--- PASS: TestPause/serial/DeletePaused (1.57s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (1.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.373413372s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-394868 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-394868 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.158344ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (143.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3120335372 start -p stopped-upgrade-232896 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3120335372 start -p stopped-upgrade-232896 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m32.144806455s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3120335372 -p stopped-upgrade-232896 stop
E1028 12:46:56.518000   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3120335372 -p stopped-upgrade-232896 stop: (11.49344108s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-232896 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1028 12:47:13.448847   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-232896 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.70245284s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (143.34s)
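The Upgrade flow above is: provision the profile with the previously released binary, stop it, then start the same profile with the binary under test and wait for it to come back healthy. Condensed, using the paths from this run:

    /tmp/minikube-v1.26.0.3120335372 start -p stopped-upgrade-232896 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.3120335372 -p stopped-upgrade-232896 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-232896 --memory=2200 --driver=kvm2 --container-runtime=crio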

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-232896
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (96.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-702694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-702694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m36.302768404s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (96.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (56.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-818470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-818470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (56.589154725s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.59s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-818470 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a72ba1e9-b5c3-4421-9700-a209c394cbe0] Pending
helpers_test.go:344: "busybox" [a72ba1e9-b5c3-4421-9700-a209c394cbe0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a72ba1e9-b5c3-4421-9700-a209c394cbe0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004626659s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-818470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)
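The DeployApp step only verifies that a plain pod schedules and is exec-able on the fresh cluster. An equivalent manual check against the same context (a sketch; kubectl wait stands in here for the test's own polling helper):

    kubectl --context embed-certs-818470 create -f testdata/busybox.yaml
    kubectl --context embed-certs-818470 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # exec into the pod and read the open-file limit, as the test does
    kubectl --context embed-certs-818470 exec busybox -- /bin/sh -c "ulimit -n"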

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-818470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-818470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-702694 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5f9a11ba-2e9c-4423-8d11-bb22717f8088] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1028 12:49:20.376153   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5f9a11ba-2e9c-4423-8d11-bb22717f8088] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003508186s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-702694 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-702694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-702694 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (655.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-818470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-818470 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m55.475346706s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818470 -n embed-certs-818470
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (655.72s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (570.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-702694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 12:52:13.451725   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-702694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m30.360854907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-702694 -n no-preload-702694
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (570.62s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-733464 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-733464 --alsologtostderr -v=3: (4.283815903s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-733464 -n old-k8s-version-733464: exit status 7 (63.939912ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-733464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
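The exit status 7 from the status command corresponds to the Stopped host shown in stdout, which the test explicitly tolerates ("may be ok"); the addon selection is recorded in the stopped profile's config and applied when it is started again. The same sequence by hand (a sketch, reusing the commands from the log):

    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-733464 -n old-k8s-version-733464   # prints Stopped, exits non-zero
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-733464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4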

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-783661 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-783661 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (53.509253651s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5f19d0ea-554f-4583-897a-132f6a43d88b] Pending
helpers_test.go:344: "busybox" [5f19d0ea-554f-4583-897a-132f6a43d88b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5f19d0ea-554f-4583-897a-132f6a43d88b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003970281s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-783661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-783661 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (577.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-783661 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-783661 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m37.415510178s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-783661 -n default-k8s-diff-port-783661
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (577.68s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (45.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-051506 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-051506 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (45.514664133s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-051506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-051506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.227285465s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-051506 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-051506 --alsologtostderr -v=3: (11.322369895s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051506 -n newest-cni-051506
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051506 -n newest-cni-051506: exit status 7 (65.985584ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-051506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-051506 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 13:17:13.448989   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/functional-665758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-051506 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (35.972883194s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051506 -n newest-cni-051506
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-051506 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-051506 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051506 -n newest-cni-051506
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051506 -n newest-cni-051506: exit status 2 (246.620202ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-051506 -n newest-cni-051506
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-051506 -n newest-cni-051506: exit status 2 (234.467824ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-051506 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051506 -n newest-cni-051506
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-051506 -n newest-cni-051506
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.31s)
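The Pause subtest is a pause / verify / unpause / verify round-trip: while paused, the apiserver reports Paused and the kubelet Stopped, and the status command exits non-zero, which the test allows. The same cycle by hand (a sketch):

    out/minikube-linux-amd64 pause -p newest-cni-051506
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-051506   # Paused (non-zero exit expected)
    out/minikube-linux-amd64 unpause -p newest-cni-051506
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-051506   # Running again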

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (79.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m19.454009984s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (74.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.528607528s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (94.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.661005129s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-297280 "pgrep -a kubelet"
I1028 13:18:57.084242   84965 config.go:182] Loaded profile config "auto-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-297280 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7dwtw" [b6598a81-5d78-4429-9c86-272be297e003] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7dwtw" [b6598a81-5d78-4429-9c86-272be297e003] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004485583s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2hpr5" [2d1667d3-5a8e-42d7-bac0-b8405c9ab6e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004318157s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-297280 "pgrep -a kubelet"
I1028 13:19:06.853374   84965 config.go:182] Loaded profile config "kindnet-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-297280 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mjjb7" [970841c0-5325-4ffd-9b8e-efd816989967] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mjjb7" [970841c0-5325-4ffd-9b8e-efd816989967] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004696769s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-297280 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
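The DNS, Localhost and HairPin checks above all run inside the netcat deployment: resolving the in-cluster kubernetes.default name, connecting to the pod's own loopback, and finally connecting back to the pod through its own Service name (the hairpin case). The same probes by hand (a sketch; assumes the auto-297280 context and the netcat deployment from this run are still present):

    kubectl --context auto-297280 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaches itself via the netcat Service
    kubectl --context auto-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"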

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-297280 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (67.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1028 13:19:24.554327   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:19:29.676739   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m7.250099925s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (100.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1028 13:19:39.918272   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m40.931396688s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xmszc" [1434f840-916d-4b90-9a4b-afdfc2dd18b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005070736s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-297280 "pgrep -a kubelet"
I1028 13:19:51.826441   84965 config.go:182] Loaded profile config "calico-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-297280 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vfgpj" [9c199a21-2d54-4b0b-b854-61e0f077584c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vfgpj" [9c199a21-2d54-4b0b-b854-61e0f077584c] Running
E1028 13:20:00.399892   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.003669954s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-297280 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (69.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1028 13:20:29.543003   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m9.71085491s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-297280 "pgrep -a kubelet"
I1028 13:20:31.713506   84965 config.go:182] Loaded profile config "custom-flannel-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-297280 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gvtqx" [36c45343-ef97-4e88-aab3-d78b48bc63e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gvtqx" [36c45343-ef97-4e88-aab3-d78b48bc63e9] Running
E1028 13:20:41.361458   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004189585s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-297280 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (91.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-297280 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m31.720223029s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-297280 "pgrep -a kubelet"
I1028 13:21:15.342693   84965 config.go:182] Loaded profile config "enable-default-cni-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-297280 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hmnhd" [86826f0c-fda3-4d29-bc12-8f2ac01fa5ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hmnhd" [86826f0c-fda3-4d29-bc12-8f2ac01fa5ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004220515s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-297280 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6h2h6" [0b8ac290-d97c-486d-ac3f-c727bd74a862] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004670032s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-297280 "pgrep -a kubelet"
I1028 13:21:39.100611   84965 config.go:182] Loaded profile config "flannel-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-297280 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-flb28" [c4ecf9df-c166-47d4-a840-285bc24207e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-flb28" [c4ecf9df-c166-47d4-a840-285bc24207e8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004345185s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)
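The NetCatPod wait tracks readiness of pods labeled app=netcat; a hedged shortcut when re-running by hand is to lean on the deployment rollout instead:
kubectl --context flannel-297280 replace --force -f testdata/netcat-deployment.yaml
kubectl --context flannel-297280 rollout status deployment/netcat --timeout=15m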

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-297280 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-297280 "pgrep -a kubelet"
I1028 13:22:31.606993   84965 config.go:182] Loaded profile config "bridge-297280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-297280 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lgk6q" [7e3685f9-8e35-4229-8f0b-31276c251361] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lgk6q" [7e3685f9-8e35-4229-8f0b-31276c251361] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006094756s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-297280 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-297280 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E1028 13:23:57.319682   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:57.326024   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:57.337403   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:57.358808   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:57.400916   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:57.482307   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:57.643900   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:57.965655   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:58.607925   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:23:59.889298   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:00.645727   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:00.652124   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:00.663471   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:00.684789   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:00.726195   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:00.807661   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:00.969231   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:01.291013   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:01.933072   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:02.451249   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:03.214508   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:05.776336   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:07.573270   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:10.898308   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:17.815597   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:19.422107   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:20.376730   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/addons-558164/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:21.139819   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:38.297689   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:41.621543   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:45.584277   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:45.590638   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:45.602117   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:45.623486   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:45.664873   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:45.746327   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:45.907862   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:46.229547   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:46.871004   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:47.124559   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/no-preload-702694/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:48.153324   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:50.715272   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:24:55.836783   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:06.078100   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:09.048273   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:19.259618   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:22.583418   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:26.559872   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:31.928070   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:31.934484   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:31.945819   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:31.967123   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:32.008478   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:32.089926   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:32.251469   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:32.573164   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:33.215227   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:34.496641   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:36.749532   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/old-k8s-version-733464/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:37.058170   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:42.179491   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:25:52.420967   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:07.522143   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/calico-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:12.902974   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:15.596822   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:15.603203   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:15.614522   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:15.635844   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:15.677188   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:15.758662   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:15.920203   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:16.241970   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:16.883455   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:18.165123   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:20.726783   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:25.848613   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:32.861974   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:32.868331   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:32.879702   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:32.901110   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:32.942507   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:33.023986   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:33.185555   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:33.507017   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:34.149085   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:35.431265   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:36.090794   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:37.992564   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:41.181980   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/auto-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:43.114412   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:44.505212   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/kindnet-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:53.356002   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:53.865134   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/custom-flannel-297280/client.crt: no such file or directory" logger="UnhandledError"
E1028 13:26:56.572117   84965 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/enable-default-cni-297280/client.crt: no such file or directory" logger="UnhandledError"
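Note on the burst of cert_rotation errors above: every path points at a client.crt under a profile that earlier tests had already torn down (auto-297280, kindnet-297280, calico-297280, custom-flannel-297280, enable-default-cni-297280, flannel-297280, among others), so this reads like client-go's certificate-reload watcher firing after the profile directories were deleted rather than a failure in the tests still running. A quick, hedged way to confirm on the runner:
out/minikube-linux-amd64 profile list
ls /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/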

                                                
                                    

Test skip (39/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.27
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestStartStop/group/disable-driver-mounts 0.14
269 TestNetworkPlugins/group/kubenet 3.88
277 TestNetworkPlugins/group/cilium 3.34
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.27s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-558164 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-213407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-213407
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-297280 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-297280" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:42:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.35:8443
  name: pause-747750
contexts:
- context:
    cluster: pause-747750
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:42:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-747750
  name: pause-747750
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-747750
  user:
    client-certificate: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/pause-747750/client.crt
    client-key: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/pause-747750/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-297280

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-297280"

                                                
                                                
----------------------- debugLogs end: kubenet-297280 [took: 3.679376769s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-297280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-297280
--- SKIP: TestNetworkPlugins/group/kubenet (3.88s)
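Note: every kubectl-based probe in the kubenet-297280 debugLogs dump above fails with "context does not exist" because the profile was never started for this skipped test; the kubeconfig shown contains only the pause-747750 entry and current-context is empty. A minimal sketch of guarding the kubectl probes behind a context-existence check (this is not the minikube test harness code; the flow and names are illustrative, assuming only that "kubectl config get-contexts <name>" exits non-zero for an unknown context):

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether kubectl knows the named context.
// "kubectl config get-contexts <name>" exits non-zero for unknown contexts.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	profile := "kubenet-297280" // hypothetical profile under inspection
	if !contextExists(profile) {
		fmt.Printf("context %q not found; skipping kubectl debug probes\n", profile)
		return
	}
	// ... run the kubectl-based probes (describe pods, logs, etc.) here ...
}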

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-297280 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-297280" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19875-77800/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:42:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.35:8443
  name: pause-747750
contexts:
- context:
    cluster: pause-747750
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:42:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-747750
  name: pause-747750
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-747750
  user:
    client-certificate: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/pause-747750/client.crt
    client-key: /home/jenkins/minikube-integration/19875-77800/.minikube/profiles/pause-747750/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-297280

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-297280" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-297280"

                                                
                                                
----------------------- debugLogs end: cilium-297280 [took: 3.193551714s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-297280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-297280
--- SKIP: TestNetworkPlugins/group/cilium (3.34s)
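The cilium-297280 dump above shows the same failure mode: the profile does not exist, so the host-level probes answer only with the "Profile not found" hint. A rough sketch of checking profile existence via "minikube profile list -o json" before collecting host logs; the JSON field names below are assumptions about minikube's output schema, not taken from this build:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the parts of "minikube profile list -o json" output this
// sketch cares about; the field names are assumed, not verified here.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

// profileExists returns true if the named minikube profile is listed as valid.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("cilium-297280") // hypothetical profile name
	if err != nil {
		fmt.Println("could not query minikube profiles:", err)
		return
	}
	if !ok {
		fmt.Println(`profile "cilium-297280" not found; skipping host-level probes`)
		return
	}
	// ... ssh into the node and collect /etc/cni, iptables, crio config, etc. ...
}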

                                                
                                    